From patchwork Sat Dec 23 02:55:31 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 182878
From: Alexander Lobakin
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Alexander Lobakin, Maciej Fijalkowski, Michal Kubiak, Larysa Zaremba,
    Alexei Starovoitov, Daniel Borkmann, Willem de Bruijn,
    intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH RFC net-next 11/34] xdp: allow attaching already registered memory model to xdp_rxq_info
Date: Sat, 23 Dec 2023 03:55:31 +0100
Message-ID: <20231223025554.2316836-12-aleksander.lobakin@intel.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20231223025554.2316836-1-aleksander.lobakin@intel.com>
References: <20231223025554.2316836-1-aleksander.lobakin@intel.com>

One may need to register a memory model separately from xdp_rxq_info. One
simple example is the XDP test run code, but in general, it might be
useful when memory model registration is managed by one layer and the
XDP RxQ info by a different one.
Allow such scenarios by adding a simple helper which "attaches" an
already registered memory model to the desired xdp_rxq_info. As this is
mostly needed for Page Pool, add a special function to do that for a
&page_pool pointer.

Signed-off-by: Alexander Lobakin
---
 include/net/xdp.h  | 14 ++++++++++++++
 net/bpf/test_run.c |  4 ++--
 net/core/xdp.c     | 12 ++++++++++++
 3 files changed, 28 insertions(+), 2 deletions(-)

diff --git a/include/net/xdp.h b/include/net/xdp.h
index 197808df1ee1..909c0bc50517 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -356,6 +356,20 @@ void xdp_rxq_info_unreg_mem_model(struct xdp_rxq_info *xdp_rxq);
 int xdp_reg_mem_model(struct xdp_mem_info *mem,
 		      enum xdp_mem_type type, void *allocator);
 void xdp_unreg_mem_model(struct xdp_mem_info *mem);
+void xdp_rxq_info_attach_page_pool(struct xdp_rxq_info *xdp_rxq,
+				   const struct page_pool *pool);
+
+static inline void
+xdp_rxq_info_attach_mem_model(struct xdp_rxq_info *xdp_rxq,
+			      const struct xdp_mem_info *mem)
+{
+	xdp_rxq->mem = *mem;
+}
+
+static inline void xdp_rxq_info_detach_mem_model(struct xdp_rxq_info *xdp_rxq)
+{
+	xdp_rxq->mem = (struct xdp_mem_info){ };
+}
 
 /* Drivers not supporting XDP metadata can use this helper, which
  * rejects any room expansion for metadata as a result.
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index dfd919374017..b612b28ebeac 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -194,8 +194,7 @@ static int xdp_test_run_setup(struct xdp_test_data *xdp, struct xdp_buff *orig_c
 	 * xdp_mem_info pointing to our page_pool
 	 */
 	xdp_rxq_info_reg(&xdp->rxq, orig_ctx->rxq->dev, 0, 0);
-	xdp->rxq.mem.type = MEM_TYPE_PAGE_POOL;
-	xdp->rxq.mem.id = pp->xdp_mem_id;
+	xdp_rxq_info_attach_page_pool(&xdp->rxq, xdp->pp);
 	xdp->dev = orig_ctx->rxq->dev;
 	xdp->orig_ctx = orig_ctx;
 
@@ -212,6 +211,7 @@ static int xdp_test_run_setup(struct xdp_test_data *xdp, struct xdp_buff *orig_c
 
 static void xdp_test_run_teardown(struct xdp_test_data *xdp)
 {
+	xdp_rxq_info_detach_mem_model(&xdp->rxq);
 	xdp_unreg_mem_model(&xdp->mem);
 	page_pool_destroy(xdp->pp);
 	kfree(xdp->frames);
diff --git a/net/core/xdp.c b/net/core/xdp.c
index 4869c1c2d8f3..03ebdb21ea62 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -368,6 +368,18 @@ int xdp_rxq_info_reg_mem_model(struct xdp_rxq_info *xdp_rxq,
 
 EXPORT_SYMBOL_GPL(xdp_rxq_info_reg_mem_model);
 
+void xdp_rxq_info_attach_page_pool(struct xdp_rxq_info *xdp_rxq,
+				   const struct page_pool *pool)
+{
+	struct xdp_mem_info mem = {
+		.type = MEM_TYPE_PAGE_POOL,
+		.id = pool->xdp_mem_id,
+	};
+
+	xdp_rxq_info_attach_mem_model(xdp_rxq, &mem);
+}
+EXPORT_SYMBOL_GPL(xdp_rxq_info_attach_page_pool);
+
 /* XDP RX runs under NAPI protection, and in different delivery error
  * scenarios (e.g. queue full), it is possible to return the xdp_frame
  * while still leveraging this protection. The @napi_direct boolean
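
Below is a minimal driver-side sketch, not part of the patch, of how the new
helpers could be split across two layers as described in the commit message:
one layer owns the page_pool and the registered memory model, another owns
the xdp_rxq_info and merely attaches/detaches it. All my_* names are
hypothetical; only xdp_reg_mem_model(), xdp_unreg_mem_model(),
xdp_rxq_info_reg()/unreg() and the attach/detach helpers added here are real
kernel APIs.

	/* SPDX-License-Identifier: GPL-2.0 */
	#include <linux/err.h>
	#include <linux/netdevice.h>
	#include <linux/numa.h>
	#include <net/page_pool/types.h>
	#include <net/xdp.h>

	/* One page_pool (and one registered memory model) shared by several RxQs */
	struct my_pp_ctx {
		struct page_pool	*pool;
		struct xdp_mem_info	mem;	/* kept for xdp_unreg_mem_model() */
	};

	struct my_rxq {
		struct xdp_rxq_info	xdp_rxq;
		struct my_pp_ctx	*pp_ctx;
	};

	/* Layer 1: owns the page_pool and registers the memory model once */
	static int my_pp_ctx_init(struct my_pp_ctx *ctx, struct device *dev)
	{
		struct page_pool_params pp_params = {
			.order		= 0,
			.pool_size	= 256,
			.nid		= NUMA_NO_NODE,
			.dev		= dev,
		};
		int err;

		ctx->pool = page_pool_create(&pp_params);
		if (IS_ERR(ctx->pool))
			return PTR_ERR(ctx->pool);

		/* Records the mem ID in pool->xdp_mem_id, which the attach
		 * helper below reads
		 */
		err = xdp_reg_mem_model(&ctx->mem, MEM_TYPE_PAGE_POOL, ctx->pool);
		if (err) {
			page_pool_destroy(ctx->pool);
			return err;
		}

		return 0;
	}

	static void my_pp_ctx_destroy(struct my_pp_ctx *ctx)
	{
		xdp_unreg_mem_model(&ctx->mem);
		page_pool_destroy(ctx->pool);
	}

	/* Layer 2: owns the xdp_rxq_info and only attaches/detaches the model */
	static int my_rxq_setup(struct my_rxq *rxq, struct net_device *netdev,
				u32 idx, struct my_pp_ctx *ctx)
	{
		int err;

		err = xdp_rxq_info_reg(&rxq->xdp_rxq, netdev, idx, 0);
		if (err)
			return err;

		rxq->pp_ctx = ctx;
		xdp_rxq_info_attach_page_pool(&rxq->xdp_rxq, ctx->pool);

		return 0;
	}

	static void my_rxq_teardown(struct my_rxq *rxq)
	{
		/* Detach first so unregistering this RxQ info does not also
		 * unregister the memory model other queues still use
		 */
		xdp_rxq_info_detach_mem_model(&rxq->xdp_rxq);
		xdp_rxq_info_unreg(&rxq->xdp_rxq);
	}

The idea in the sketch is that only the owner of the page_pool ever calls
xdp_unreg_mem_model(), while each queue just zeroes its own xdp_rxq->mem on
teardown, mirroring what test_run.c now does before tearing down its
page_pool.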