Message ID | 20230725074148.2936402-1-wei.fang@nxp.com |
---|---|
State | New |
Headers | show |
Series | [net] net: fec: tx processing does not call XDP APIs if budget is 0 |
Commit Message
Wei Fang
July 25, 2023, 7:41 a.m. UTC
According to the clarification [1] in the latest napi.rst, tx
processing must not call any XDP (or page pool) APIs if the "budget"
is 0: NAPI being called with a budget of 0 (as netpoll does) indicates
that we may be in IRQ context, and the page pool cannot be used from
IRQ context.
[1] https://lore.kernel.org/all/20230720161323.2025379-1-kuba@kernel.org/
Fixes: 20f797399035 ("net: fec: recycle pages for transmitted XDP frames")
Signed-off-by: Wei Fang <wei.fang@nxp.com>
Suggested-by: Jakub Kicinski <kuba@kernel.org>
---
drivers/net/ethernet/freescale/fec_main.c | 16 ++++++++++++----
1 file changed, 12 insertions(+), 4 deletions(-)
Comments
On Tue, 2023-07-25 at 15:41 +0800, Wei Fang wrote:
> According to the clarification [1] in the latest napi.rst, the tx
> processing cannot call any XDP (or page pool) APIs if the "budget"
> is 0. Because NAPI is called with the budget of 0 (such as netpoll)
> indicates we may be in an IRQ context, however, we cannot use the
> page pool from IRQ context.
>
> [1] https://lore.kernel.org/all/20230720161323.2025379-1-kuba@kernel.org/
>
> Fixes: 20f797399035 ("net: fec: recycle pages for transmitted XDP frames")
> Signed-off-by: Wei Fang <wei.fang@nxp.com>
> Suggested-by: Jakub Kicinski <kuba@kernel.org>
> ---
>  drivers/net/ethernet/freescale/fec_main.c | 16 ++++++++++++----
>  1 file changed, 12 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
> index 073d61619336..66b5cbdb43b9 100644
> --- a/drivers/net/ethernet/freescale/fec_main.c
> +++ b/drivers/net/ethernet/freescale/fec_main.c
> @@ -1372,7 +1372,7 @@ fec_enet_hwtstamp(struct fec_enet_private *fep, unsigned ts,
>  }
>
>  static void
> -fec_enet_tx_queue(struct net_device *ndev, u16 queue_id)
> +fec_enet_tx_queue(struct net_device *ndev, u16 queue_id, int budget)
>  {
>  	struct fec_enet_private *fep;
>  	struct xdp_frame *xdpf;
> @@ -1416,6 +1416,14 @@ fec_enet_tx_queue(struct net_device *ndev, u16 queue_id)
>  		if (!skb)
>  			goto tx_buf_done;
>  	} else {
> +		/* Tx processing cannot call any XDP (or page pool) APIs if
> +		 * the "budget" is 0. Because NAPI is called with budget of
> +		 * 0 (such as netpoll) indicates we may be in an IRQ context,
> +		 * however, we can't use the page pool from IRQ context.
> +		 */
> +		if (unlikely(!budget))
> +			break;
> +
>  		xdpf = txq->tx_buf[index].xdp;
>  		if (bdp->cbd_bufaddr)
>  			dma_unmap_single(&fep->pdev->dev,

This statement isn't correct. There are napi enabled and non-napi
versions of these calls. This is the reason for things like the
"allow_direct" parameter in page_pool_put_full_page and the
"napi_direct" parameter in __xdp_return.

By blocking on these cases you can end up hanging the Tx queue, which
is going to break netpoll, as you are going to stall the ring on XDP
packets if they are already in the queue.

From what I can tell, your driver is using xdp_return_frame in the
case of an XDP frame, which doesn't make use of the NAPI optimizations
in freeing. The NAPI optimized version is xdp_return_frame_rx_napi.

> @@ -1508,14 +1516,14 @@ fec_enet_tx_queue(struct net_device *ndev, u16 queue_id)
>  	writel(0, txq->bd.reg_desc_active);
>  }
>
> -static void fec_enet_tx(struct net_device *ndev)
> +static void fec_enet_tx(struct net_device *ndev, int budget)
>  {
>  	struct fec_enet_private *fep = netdev_priv(ndev);
>  	int i;
>
>  	/* Make sure that AVB queues are processed first. */
>  	for (i = fep->num_tx_queues - 1; i >= 0; i--)
> -		fec_enet_tx_queue(ndev, i);
> +		fec_enet_tx_queue(ndev, i, budget);
>  }
>
>  static void fec_enet_update_cbd(struct fec_enet_priv_rx_q *rxq,
> @@ -1858,7 +1866,7 @@ static int fec_enet_rx_napi(struct napi_struct *napi, int budget)
>
>  	do {
>  		done += fec_enet_rx(ndev, budget - done);
> -		fec_enet_tx(ndev);
> +		fec_enet_tx(ndev, budget);
>  	} while ((done < budget) && fec_enet_collect_events(fep));
>
>  	if (done < budget) {

Since you are passing budget, one optimization you could make use of
would be napi_consume_skb in your Tx path instead of dev_kfree_skb_any.
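For reference, the distinction drawn above maps onto two real kernel
APIs. A minimal sketch, assuming a hypothetical reclaim helper (only
the xdp_return_frame*() calls are the actual APIs):

	#include <net/xdp.h>

	/* Hypothetical tx-reclaim helper: xdp_return_frame() is the
	 * general-purpose free, while xdp_return_frame_rx_napi() may only
	 * be called from the NAPI poll loop, where it can recycle pages
	 * directly into the page pool's lockless per-CPU cache.
	 */
	static void reclaim_xdp_frame(struct xdp_frame *xdpf, bool in_napi_poll)
	{
		if (in_napi_poll)
			xdp_return_frame_rx_napi(xdpf);	/* direct recycling */
		else
			xdp_return_frame(xdpf);		/* non-direct path */
	}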
Hi Alexander,

> > @@ -1416,6 +1416,14 @@ fec_enet_tx_queue(struct net_device *ndev, u16 queue_id)
> >  		if (!skb)
> >  			goto tx_buf_done;
> >  	} else {
> > +		/* Tx processing cannot call any XDP (or page pool) APIs if
> > +		 * the "budget" is 0. Because NAPI is called with budget of
> > +		 * 0 (such as netpoll) indicates we may be in an IRQ context,
> > +		 * however, we can't use the page pool from IRQ context.
> > +		 */
> > +		if (unlikely(!budget))
> > +			break;
> > +
> >  		xdpf = txq->tx_buf[index].xdp;
> >  		if (bdp->cbd_bufaddr)
> >  			dma_unmap_single(&fep->pdev->dev,
>
> This statement isn't correct. There are napi enabled and non-napi
> versions of these calls. This is the reason for things like the
> "allow_direct" parameter in page_pool_put_full_page and the
> "napi_direct" parameter in __xdp_return.
>
> By blocking on these cases you can end up hanging the Tx queue which is
> going to break netpoll as you are going to stall the ring on XDP
> packets if they are already in the queue.
>
> From what I can tell your driver is using xdp_return_frame in the case
> of an XDP frame which doesn't make use of the NAPI optimizations in
> freeing. The NAPI optimized version is xdp_return_frame_rx_napi.
>
So you mean it is safe to use xdp_return_frame whether in NAPI context
or non-NAPI context? And xdp_return_frame_rx_napi must be used in NAPI
context? If so, I think I must have misunderstood, and this patch is
not necessary.

> > @@ -1508,14 +1516,14 @@ fec_enet_tx_queue(struct net_device *ndev, u16 queue_id)
> >  	writel(0, txq->bd.reg_desc_active);
> >  }
> >
> > -static void fec_enet_tx(struct net_device *ndev)
> > +static void fec_enet_tx(struct net_device *ndev, int budget)
> >  {
> >  	struct fec_enet_private *fep = netdev_priv(ndev);
> >  	int i;
> >
> >  	/* Make sure that AVB queues are processed first. */
> >  	for (i = fep->num_tx_queues - 1; i >= 0; i--)
> > -		fec_enet_tx_queue(ndev, i);
> > +		fec_enet_tx_queue(ndev, i, budget);
> >  }
> >
> >  static void fec_enet_update_cbd(struct fec_enet_priv_rx_q *rxq,
> > @@ -1858,7 +1866,7 @@ static int fec_enet_rx_napi(struct napi_struct *napi, int budget)
> >
> >  	do {
> >  		done += fec_enet_rx(ndev, budget - done);
> > -		fec_enet_tx(ndev);
> > +		fec_enet_tx(ndev, budget);
> >  	} while ((done < budget) && fec_enet_collect_events(fep));
> >
> >  	if (done < budget) {
>
> Since you are passing budget, one optimization you could make use of
> would be napi_consume_skb in your Tx path instead of dev_kfree_skb_any.

That's a good suggestion, I think I can add this optimization in my
XDP_TX support patch. Thanks!
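As a rough illustration of the napi_consume_skb() suggestion (the
surrounding helper is hypothetical, not the actual fec code):

	#include <linux/skbuff.h>

	/* napi_consume_skb() batches frees when called from NAPI poll with
	 * budget > 0, and falls back to a context-safe free when budget is
	 * 0 (e.g. netpoll), so passing the poll budget through lets one
	 * call site replace dev_kfree_skb_any() in the tx reclaim path.
	 */
	static void tx_complete_skb(struct sk_buff *skb, int budget)
	{
		napi_consume_skb(skb, budget);	/* instead of dev_kfree_skb_any(skb) */
	}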
On Tue, Jul 25, 2023 at 8:40 PM Wei Fang <wei.fang@nxp.com> wrote:
>
> Hi Alexander,
>
> > > @@ -1416,6 +1416,14 @@ fec_enet_tx_queue(struct net_device *ndev, u16 queue_id)
> > >  		if (!skb)
> > >  			goto tx_buf_done;
> > >  	} else {
> > > +		/* Tx processing cannot call any XDP (or page pool) APIs if
> > > +		 * the "budget" is 0. Because NAPI is called with budget of
> > > +		 * 0 (such as netpoll) indicates we may be in an IRQ context,
> > > +		 * however, we can't use the page pool from IRQ context.
> > > +		 */
> > > +		if (unlikely(!budget))
> > > +			break;
> > > +
> > >  		xdpf = txq->tx_buf[index].xdp;
> > >  		if (bdp->cbd_bufaddr)
> > >  			dma_unmap_single(&fep->pdev->dev,
> >
> > This statement isn't correct. There are napi enabled and non-napi
> > versions of these calls. This is the reason for things like the
> > "allow_direct" parameter in page_pool_put_full_page and the
> > "napi_direct" parameter in __xdp_return.
> >
> > By blocking on these cases you can end up hanging the Tx queue which is
> > going to break netpoll as you are going to stall the ring on XDP
> > packets if they are already in the queue.
> >
> > From what I can tell your driver is using xdp_return_frame in the case
> > of an XDP frame which doesn't make use of the NAPI optimizations in
> > freeing. The NAPI optimized version is xdp_return_frame_rx_napi.
> >
> So you mean it is safe to use xdp_return_frame whether in NAPI context
> or non-NAPI context? And xdp_return_frame_rx_napi must be used in NAPI
> context? If so, I think I must have misunderstood, and this patch is
> not necessary.

Actually, after talking with Jakub a bit more, there is an issue here,
but not freeing the frames isn't the solution. We likely need to just
fix the page pool code so that it doesn't attempt to recycle the
frames if operating in IRQ context.

The way this is dealt with for skbs is that we queue skbs if we are in
IRQ context so that freeing can be deferred to net_tx_action. We
likely need to look at doing something similar for page_pool pages or
XDP frames.

> > > @@ -1508,14 +1516,14 @@ fec_enet_tx_queue(struct net_device *ndev, u16 queue_id)
> > >  	writel(0, txq->bd.reg_desc_active);
> > >  }
> > >
> > > -static void fec_enet_tx(struct net_device *ndev)
> > > +static void fec_enet_tx(struct net_device *ndev, int budget)
> > >  {
> > >  	struct fec_enet_private *fep = netdev_priv(ndev);
> > >  	int i;
> > >
> > >  	/* Make sure that AVB queues are processed first. */
> > >  	for (i = fep->num_tx_queues - 1; i >= 0; i--)
> > > -		fec_enet_tx_queue(ndev, i);
> > > +		fec_enet_tx_queue(ndev, i, budget);
> > >  }
> > >
> > >  static void fec_enet_update_cbd(struct fec_enet_priv_rx_q *rxq,
> > > @@ -1858,7 +1866,7 @@ static int fec_enet_rx_napi(struct napi_struct *napi, int budget)
> > >
> > >  	do {
> > >  		done += fec_enet_rx(ndev, budget - done);
> > > -		fec_enet_tx(ndev);
> > > +		fec_enet_tx(ndev, budget);
> > >  	} while ((done < budget) && fec_enet_collect_events(fep));
> > >
> > >  	if (done < budget) {
> >
> > Since you are passing budget, one optimization you could make use of
> > would be napi_consume_skb in your Tx path instead of dev_kfree_skb_any.
> That's a good suggestion, I think I can add this optimization in my
> XDP_TX support patch. Thanks!
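For context, the skb-side deferral described above is roughly the
behaviour of dev_kfree_skb_any(); a simplified sketch of that pattern
(not the exact in-tree implementation):

	#include <linux/netdevice.h>

	/* Simplified: in hard IRQ context (or with IRQs disabled) the skb
	 * is queued on a per-CPU completion list and later freed by
	 * net_tx_action() in softirq context; otherwise it is freed
	 * immediately. The discussion here is about adding an equivalent
	 * deferral for page pool pages and XDP frames.
	 */
	static inline void kfree_skb_any_sketch(struct sk_buff *skb)
	{
		if (in_hardirq() || irqs_disabled())
			dev_kfree_skb_irq(skb);	/* defer to net_tx_action() */
		else
			dev_kfree_skb(skb);	/* free now */
	}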
> > > This statement isn't correct. There are napi enabled and non-napi
> > > versions of these calls. This is the reason for things like the
> > > "allow_direct" parameter in page_pool_put_full_page and the
> > > "napi_direct" parameter in __xdp_return.
> > >
> > > By blocking on these cases you can end up hanging the Tx queue which is
> > > going to break netpoll as you are going to stall the ring on XDP
> > > packets if they are already in the queue.
> > >
> > > From what I can tell your driver is using xdp_return_frame in the case
> > > of an XDP frame which doesn't make use of the NAPI optimizations in
> > > freeing. The NAPI optimized version is xdp_return_frame_rx_napi.
> > >
> > So you mean it is safe to use xdp_return_frame whether in NAPI context
> > or non-NAPI context? And xdp_return_frame_rx_napi must be used in NAPI
> > context? If so, I think I must have misunderstood, and this patch is
> > not necessary.
>
> Actually after talking with Jakub a bit more there is an issue here,
> but not freeing the frames isn't the solution. We likely need to just
> fix the page pool code so that it doesn't attempt to recycle the
> frames if operating in IRQ context.
>
> The way this is dealt with for skbs is that we queue skbs if we are in
> IRQ context so that freeing can be deferred to net_tx_action. We
> likely need to look at doing something similar for page_pool pages or
> XDP frames.
>
After reading your discussion with Jakub, I understand this issue a bit
more. But we are not sure when this issue will be fixed in the page
pool; currently we can only tolerate a delay in the sending of a
netpoll message. So I think this patch is necessary, and I will refine
it in the future once the page pool has fixed the issue. In addition,
as you mentioned before, napi_consume_skb should be used instead of
dev_kfree_skb_any, so I will improve this patch in version 2. Thanks.
On Thu, 27 Jul 2023 02:08:32 +0000 Wei Fang wrote:
> > Actually after talking with Jakub a bit more there is an issue here,
> > but not freeing the frames isn't the solution. We likely need to just
> > fix the page pool code so that it doesn't attempt to recycle the
> > frames if operating in IRQ context.
> >
> > The way this is dealt with for skbs is that we queue skbs if we are in
> > IRQ context so that freeing can be deferred to net_tx_action. We
> > likely need to look at doing something similar for page_pool pages or
> > XDP frames.
> >
> After reading your discussion with Jakub, I understand this issue a bit more.
> But we are not sure when this issue will be fixed in the page pool;
> currently we can only tolerate a delay in the sending of a netpoll
> message. So I think this patch is necessary, and I will refine it in
> the future once the page pool has fixed the issue. In addition, as you
> mentioned before, napi_consume_skb should be used instead of
> dev_kfree_skb_any, so I will improve this patch in version 2.

I think so too. Since the patch can only help, you already wrote it,
and it won't be extra backporting work since the code is only present
in 6.5 - I think it's worth applying. And we can refine things as page
pool limitations get listed (the napi_consume_skb() is net-next
material, anyway).
Hello:

This patch was applied to netdev/net.git (main)
by Jakub Kicinski <kuba@kernel.org>:

On Tue, 25 Jul 2023 15:41:48 +0800 you wrote:
> According to the clarification [1] in the latest napi.rst, the tx
> processing cannot call any XDP (or page pool) APIs if the "budget"
> is 0. Because NAPI is called with the budget of 0 (such as netpoll)
> indicates we may be in an IRQ context, however, we cannot use the
> page pool from IRQ context.
>
> [1] https://lore.kernel.org/all/20230720161323.2025379-1-kuba@kernel.org/
>
> [...]

Here is the summary with links:
  - [net] net: fec: tx processing does not call XDP APIs if budget is 0
    https://git.kernel.org/netdev/net/c/15cec633fc7b

You are awesome, thank you!
> On Thu, 27 Jul 2023 02:08:32 +0000 Wei Fang wrote:
> > > Actually after talking with Jakub a bit more there is an issue here,
> > > but not freeing the frames isn't the solution. We likely need to
> > > just fix the page pool code so that it doesn't attempt to recycle
> > > the frames if operating in IRQ context.
> > >
> > > The way this is dealt with for skbs is that we queue skbs if we are
> > > in IRQ context so that freeing can be deferred to net_tx_action. We
> > > likely need to look at doing something similar for page_pool pages
> > > or XDP frames.
> > >
> > After reading your discussion with Jakub, I understand this issue a bit more.
> > But we are not sure when this issue will be fixed in the page pool;
> > currently we can only tolerate a delay in the sending of a netpoll
> > message. So I think this patch is necessary, and I will refine it in
> > the future once the page pool has fixed the issue. In addition, as you
> > mentioned before, napi_consume_skb should be used instead of
> > dev_kfree_skb_any, so I will improve this patch in version 2.
>
> I think so too. Since the patch can only help, you already wrote it,
> and it won't be extra backporting work since the code is only present
> in 6.5 - I think it's worth applying. And we can refine things as page
> pool limitations get listed (the napi_consume_skb() is net-next
> material, anyway).

Okay, thank you. :)
diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
index 073d61619336..66b5cbdb43b9 100644
--- a/drivers/net/ethernet/freescale/fec_main.c
+++ b/drivers/net/ethernet/freescale/fec_main.c
@@ -1372,7 +1372,7 @@ fec_enet_hwtstamp(struct fec_enet_private *fep, unsigned ts,
 }
 
 static void
-fec_enet_tx_queue(struct net_device *ndev, u16 queue_id)
+fec_enet_tx_queue(struct net_device *ndev, u16 queue_id, int budget)
 {
 	struct fec_enet_private *fep;
 	struct xdp_frame *xdpf;
@@ -1416,6 +1416,14 @@ fec_enet_tx_queue(struct net_device *ndev, u16 queue_id)
 		if (!skb)
 			goto tx_buf_done;
 	} else {
+		/* Tx processing cannot call any XDP (or page pool) APIs if
+		 * the "budget" is 0. Because NAPI is called with budget of
+		 * 0 (such as netpoll) indicates we may be in an IRQ context,
+		 * however, we can't use the page pool from IRQ context.
+		 */
+		if (unlikely(!budget))
+			break;
+
 		xdpf = txq->tx_buf[index].xdp;
 		if (bdp->cbd_bufaddr)
 			dma_unmap_single(&fep->pdev->dev,
@@ -1508,14 +1516,14 @@ fec_enet_tx_queue(struct net_device *ndev, u16 queue_id)
 	writel(0, txq->bd.reg_desc_active);
 }
 
-static void fec_enet_tx(struct net_device *ndev)
+static void fec_enet_tx(struct net_device *ndev, int budget)
 {
 	struct fec_enet_private *fep = netdev_priv(ndev);
 	int i;
 
 	/* Make sure that AVB queues are processed first. */
 	for (i = fep->num_tx_queues - 1; i >= 0; i--)
-		fec_enet_tx_queue(ndev, i);
+		fec_enet_tx_queue(ndev, i, budget);
 }
 
 static void fec_enet_update_cbd(struct fec_enet_priv_rx_q *rxq,
@@ -1858,7 +1866,7 @@ static int fec_enet_rx_napi(struct napi_struct *napi, int budget)
 
 	do {
 		done += fec_enet_rx(ndev, budget - done);
-		fec_enet_tx(ndev);
+		fec_enet_tx(ndev, budget);
 	} while ((done < budget) && fec_enet_collect_events(fep));
 
 	if (done < budget) {
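For context on where a zero budget comes from, a simplified sketch of
the netpoll side (the real code is poll_one_napi() in
net/core/netpoll.c; this is an approximation, not a verbatim copy):

	/* Netpoll invokes the driver's NAPI handler with a budget of 0,
	 * so the handler may only do tx completion work; with this patch,
	 * fec additionally skips all XDP/page pool frees in that case.
	 */
	static void poll_one_napi_sketch(struct napi_struct *napi)
	{
		int work = napi->poll(napi, 0);	/* budget == 0 */

		WARN_ONCE(work, "%pS exceeded budget in poll\n", napi->poll);
	}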