From patchwork Mon Feb  5 11:04:24 2024
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 196740
From: Alexander Lobakin
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Alexander Lobakin, Christoph Hellwig, Marek Szyprowski, Robin Murphy,
    Joerg Roedel, Will Deacon, Greg Kroah-Hartman, "Rafael J. Wysocki",
    Magnus Karlsson, Maciej Fijalkowski, Alexander Duyck,
    bpf@vger.kernel.org, netdev@vger.kernel.org, iommu@lists.linux.dev,
    linux-kernel@vger.kernel.org
Subject: [PATCH net-next v2 5/7] page_pool: don't use driver-set flags field directly
Date: Mon, 5 Feb 2024 12:04:24 +0100
Message-ID: <20240205110426.764393-6-aleksander.lobakin@intel.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240205110426.764393-1-aleksander.lobakin@intel.com>
References: <20240205110426.764393-1-aleksander.lobakin@intel.com>

page_pool::p is the set of driver-defined params, copied directly from the
structure passed to page_pool_create(). The structure isn't meant to be
modified by the page_pool core code, and doing so might even look
confusing[0][1].

In order to be able to alter some flags, let's define our own internal
fields the same way as the already existing one (::has_init_callback).
Keep them as bits, the same way the driver-set flags are defined, so as
not to waste a byte per flag. Almost 30 bits are still free for future
extensions.

We could've defined only the new flags here, or only the ones we may need
to alter, but checking some flags in one place and others in another isn't
convenient or intuitive. ::flags passed by the driver can now go to the
"slow" PP params.
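For reference, a minimal sketch of the driver-facing usage that stays
unchanged by this patch: a driver still fills struct page_pool_params,
including ::flags, and calls page_pool_create(); only the core now reads
the flags from the "slow" copy at init time and from the internal bits on
the hot paths. The function name and all parameter values below (ring
size, order, offset and so on) are made-up placeholders, not taken from
any real driver.

    #include <linux/numa.h>
    #include <net/page_pool/helpers.h>
    #include <net/page_pool/types.h>

    /* Hypothetical driver init path: request DMA mapping and
     * DMA-sync-for-device via the driver-set ::flags. After this patch,
     * page_pool_init() validates pool->slow.flags and mirrors them into
     * the internal ::dma_map / ::dma_sync bits.
     */
    static struct page_pool *example_create_rx_pool(struct device *dev)
    {
    	struct page_pool_params pp_params = {
    		.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
    		.order		= 0,
    		.pool_size	= 256,
    		.nid		= NUMA_NO_NODE,
    		.dev		= dev,
    		.dma_dir	= DMA_FROM_DEVICE,
    		.max_len	= PAGE_SIZE,
    		.offset		= 0,
    	};

    	/* Returns a valid pool or an ERR_PTR() on failure */
    	return page_pool_create(&pp_params);
    }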
Suggested-by: Jakub Kicinski
Link[0]: https://lore.kernel.org/netdev/20230703133207.4f0c54ce@kernel.org
Suggested-by: Alexander Duyck
Link[1]: https://lore.kernel.org/netdev/CAKgT0UfZCGnWgOH96E4GV3ZP6LLbROHM7SHE8NKwq+exX+Gk_Q@mail.gmail.com
Signed-off-by: Alexander Lobakin
---
 include/net/page_pool/types.h |  9 ++++++---
 net/core/page_pool.c          | 34 ++++++++++++++++++----------------
 2 files changed, 24 insertions(+), 19 deletions(-)

diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index 217e73b7e4fc..6a767ad1c572 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -44,7 +44,6 @@ struct pp_alloc_cache {
 
 /**
  * struct page_pool_params - page pool parameters
- * @flags:	PP_FLAG_DMA_MAP, PP_FLAG_DMA_SYNC_DEV
  * @order:	2^order pages on allocation
  * @pool_size:	size of the ptr_ring
  * @nid:	NUMA node id to allocate from pages from
@@ -54,10 +53,10 @@ struct pp_alloc_cache {
  * @dma_dir:	DMA mapping direction
  * @max_len:	max DMA sync memory size for PP_FLAG_DMA_SYNC_DEV
  * @offset:	DMA sync address offset for PP_FLAG_DMA_SYNC_DEV
+ * @flags:	PP_FLAG_DMA_MAP, PP_FLAG_DMA_SYNC_DEV
  */
 struct page_pool_params {
 	struct_group_tagged(page_pool_params_fast, fast,
-		unsigned int	flags;
 		unsigned int	order;
 		unsigned int	pool_size;
 		int		nid;
@@ -68,6 +67,7 @@ struct page_pool_params {
 		unsigned int	offset;
 	);
 	struct_group_tagged(page_pool_params_slow, slow,
+		unsigned int	flags;
 		struct net_device *netdev;
 /* private: used by test code only */
 		void (*init_callback)(struct page *page, void *arg);
@@ -129,7 +129,10 @@ struct page_pool {
 	struct page_pool_params_fast p;
 
 	u32 pages_state_hold_cnt;
-	bool has_init_callback;
+
+	bool dma_map:1;			/* Perform DMA mapping */
+	bool dma_sync:1;		/* Perform DMA sync */
+	bool has_init_callback:1;	/* slow.init_callback is set */
 
 	/* The following block must stay within one cacheline. On 32-bit
 	 * systems, sizeof(long) == sizeof(int), so that the block size is
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index be1219816990..2c353906407c 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -188,7 +188,7 @@ static int page_pool_init(struct page_pool *pool,
 	memcpy(&pool->slow, &params->slow, sizeof(pool->slow));
 
 	/* Validate only known flags were used */
-	if (pool->p.flags & ~(PP_FLAG_ALL))
+	if (pool->slow.flags & ~(PP_FLAG_ALL))
 		return -EINVAL;
 
 	if (pool->p.pool_size)
@@ -202,22 +202,26 @@ static int page_pool_init(struct page_pool *pool,
 	 * DMA_BIDIRECTIONAL is for allowing page used for DMA sending,
 	 * which is the XDP_TX use-case.
 	 */
-	if (pool->p.flags & PP_FLAG_DMA_MAP) {
+	if (pool->slow.flags & PP_FLAG_DMA_MAP) {
 		if ((pool->p.dma_dir != DMA_FROM_DEVICE) &&
 		    (pool->p.dma_dir != DMA_BIDIRECTIONAL))
 			return -EINVAL;
+
+		pool->dma_map = true;
 	}
 
-	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV) {
+	if (pool->slow.flags & PP_FLAG_DMA_SYNC_DEV) {
 		/* In order to request DMA-sync-for-device the page
 		 * needs to be mapped
 		 */
-		if (!(pool->p.flags & PP_FLAG_DMA_MAP))
+		if (!(pool->slow.flags & PP_FLAG_DMA_MAP))
			return -EINVAL;
 
 		if (!pool->p.max_len)
 			return -EINVAL;
 
+		pool->dma_sync = true;
+
 		/* pool->p.offset has to be set according to the address
 		 * offset used by the DMA engine to start copying rx data
 		 */
@@ -243,7 +247,7 @@ static int page_pool_init(struct page_pool *pool,
 	/* Driver calling page_pool_create() also call page_pool_destroy() */
 	refcount_set(&pool->user_cnt, 1);
 
-	if (pool->p.flags & PP_FLAG_DMA_MAP)
+	if (pool->dma_map)
 		get_device(pool->p.dev);
 
 	return 0;
@@ -253,7 +257,7 @@ static void page_pool_uninit(struct page_pool *pool)
 {
 	ptr_ring_cleanup(&pool->ring, NULL);
 
-	if (pool->p.flags & PP_FLAG_DMA_MAP)
+	if (pool->dma_map)
 		put_device(pool->p.dev);
 
 #ifdef CONFIG_PAGE_POOL_STATS
@@ -396,7 +400,7 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
 	if (page_pool_set_dma_addr(page, dma))
 		goto unmap_failed;
 
-	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
+	if (pool->dma_sync)
 		page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
 
 	return true;
@@ -442,8 +446,7 @@ static struct page *__page_pool_alloc_page_order(struct page_pool *pool,
 	if (unlikely(!page))
 		return NULL;
 
-	if ((pool->p.flags & PP_FLAG_DMA_MAP) &&
-	    unlikely(!page_pool_dma_map(pool, page))) {
+	if (pool->dma_map && unlikely(!page_pool_dma_map(pool, page))) {
 		put_page(page);
 		return NULL;
 	}
@@ -463,8 +466,8 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 						 gfp_t gfp)
 {
 	const int bulk = PP_ALLOC_CACHE_REFILL;
-	unsigned int pp_flags = pool->p.flags;
 	unsigned int pp_order = pool->p.order;
+	bool dma_map = pool->dma_map;
 	struct page *page;
 	int i, nr_pages;
 
@@ -489,8 +492,7 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 	 */
 	for (i = 0; i < nr_pages; i++) {
 		page = pool->alloc.cache[i];
-		if ((pp_flags & PP_FLAG_DMA_MAP) &&
-		    unlikely(!page_pool_dma_map(pool, page))) {
+		if (dma_map && unlikely(!page_pool_dma_map(pool, page))) {
 			put_page(page);
 			continue;
 		}
@@ -562,7 +564,7 @@ void __page_pool_release_page_dma(struct page_pool *pool, struct page *page)
 {
 	dma_addr_t dma;
 
-	if (!(pool->p.flags & PP_FLAG_DMA_MAP))
+	if (!pool->dma_map)
 		/* Always account for inflight pages, even if we didn't
 		 * map them
 		 */
@@ -640,7 +642,7 @@ static bool page_pool_recycle_in_cache(struct page *page,
 }
 
 /* If the page refcnt == 1, this will try to recycle the page.
- * if PP_FLAG_DMA_SYNC_DEV is set, we'll try to sync the DMA area for
+ * If pool->dma_sync is set, we'll try to sync the DMA area for
 * the configured size min(dma_sync_size, pool->max_len).
 * If the page refcnt != 1, then the page will be returned to memory
 * subsystem.
@@ -663,7 +665,7 @@ __page_pool_put_page(struct page_pool *pool, struct page *page,
 	if (likely(page_ref_count(page) == 1 && !page_is_pfmemalloc(page))) {
 		/* Read barrier done in page_ref_count / READ_ONCE */
 
-		if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
+		if (pool->dma_sync)
 			page_pool_dma_sync_for_device(pool, page,
 						      dma_sync_size);
 
@@ -776,7 +778,7 @@ static struct page *page_pool_drain_frag(struct page_pool *pool,
 		return NULL;
 
 	if (page_ref_count(page) == 1 && !page_is_pfmemalloc(page)) {
-		if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
+		if (pool->dma_sync)
 			page_pool_dma_sync_for_device(pool, page, -1);
 
 		return page;