From patchwork Sat Dec 23 02:55:32 2023
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 182879
From: Alexander Lobakin
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Maciej Fijalkowski , Michal Kubiak , Larysa Zaremba , Alexei Starovoitov , Daniel Borkmann , Willem de Bruijn , intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH RFC net-next 12/34] xdp: add generic xdp_buff_add_frag() Date: Sat, 23 Dec 2023 03:55:32 +0100 Message-ID: <20231223025554.2316836-13-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20231223025554.2316836-1-aleksander.lobakin@intel.com> References: <20231223025554.2316836-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1786040094434181228 X-GMAIL-MSGID: 1786040094434181228 The code piece which would attach a frag to &xdp_buff is almost identical across the drivers supporting XDP multi-buffer on Rx. Make it a generic elegant onelner. Also, I see lots of drivers calculating frags_truesize as `xdp->frame_sz * nr_frags`. I can't say this is fully correct, since frags might be backed by chunks of different sizes, especially with stuff like the header split. Even page_pool_alloc() can give you two different truesizes on two subsequent requests to allocate the same buffer size. Add a field to &skb_shared_info (unionized as there's no free slot currently on x6_64) to track the "true" truesize. It can be used later when updating an skb. Signed-off-by: Alexander Lobakin --- include/linux/skbuff.h | 14 ++++++++++---- include/net/xdp.h | 36 +++++++++++++++++++++++++++++++++++- 2 files changed, 45 insertions(+), 5 deletions(-) diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index ea5c8ab3ed00..e350efa04070 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -598,11 +598,17 @@ struct skb_shared_info { * Warning : all fields before dataref are cleared in __alloc_skb() */ atomic_t dataref; - unsigned int xdp_frags_size; - /* Intermediate layers must ensure that destructor_arg - * remains valid until skb destructor */ - void * destructor_arg; + union { + struct { + unsigned int xdp_frags_size; + u32 xdp_frags_truesize; + }; + + /* Intermediate layers must ensure that destructor_arg + * remains valid until skb destructor */ + void * destructor_arg; + }; /* must be last field, see pskb_expand_head() */ skb_frag_t frags[MAX_SKB_FRAGS]; diff --git a/include/net/xdp.h b/include/net/xdp.h index 909c0bc50517..a3dc0f39b437 100644 --- a/include/net/xdp.h +++ b/include/net/xdp.h @@ -165,6 +165,34 @@ xdp_get_buff_len(const struct xdp_buff *xdp) return len; } +static inline bool xdp_buff_add_frag(struct xdp_buff *xdp, struct page *page, + u32 offset, u32 size, u32 truesize) +{ + struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp); + + if (!xdp_buff_has_frags(xdp)) { + sinfo->nr_frags = 0; + + sinfo->xdp_frags_size = 0; + sinfo->xdp_frags_truesize = 0; + + xdp_buff_set_frags_flag(xdp); + } + + if (unlikely(sinfo->nr_frags == MAX_SKB_FRAGS)) + return false; + + __skb_fill_page_desc_noacc(sinfo, sinfo->nr_frags++, page, offset, + size); + sinfo->xdp_frags_size += size; + sinfo->xdp_frags_truesize += truesize; + + if (unlikely(page_is_pfmemalloc(page))) + xdp_buff_set_frag_pfmemalloc(xdp); + + return true; +} + struct xdp_frame { void *data; u16 len; @@ -230,7 +258,13 @@ xdp_update_skb_shared_info(struct sk_buff *skb, u8 nr_frags, unsigned int size, unsigned int truesize, bool pfmemalloc) { - 
-	skb_shinfo(skb)->nr_frags = nr_frags;
+	struct skb_shared_info *sinfo = skb_shinfo(skb);
+
+	sinfo->nr_frags = nr_frags;
+	/* ``destructor_arg`` is unionized with ``xdp_frags_{,true}size``,
+	 * so reset it only once those fields are no longer in use.
+	 */
+	sinfo->destructor_arg = NULL;
 
 	skb->len += size;
 	skb->data_len += size;
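
A driver's Rx completion path could then collect frags with a single call
per descriptor. Below is a minimal sketch of such usage; mydrv_add_rx_frag()
and its drop-on-overflow handling are hypothetical illustrations, not part
of this patch:

/* Hypothetical driver helper: only xdp_buff_add_frag() and
 * xdp_return_buff() are existing net APIs here.
 */
#include <net/xdp.h>

static bool mydrv_add_rx_frag(struct xdp_buff *xdp, struct page *page,
			      u32 offset, u32 size, u32 truesize)
{
	/* One call replaces the open-coded &skb_shared_info bookkeeping:
	 * it resets the frag state on the first fragment, fills the next
	 * frag slot, accumulates xdp_frags_{,true}size and propagates the
	 * pfmemalloc flag.
	 */
	if (!xdp_buff_add_frag(xdp, page, offset, size, truesize)) {
		/* All MAX_SKB_FRAGS slots are taken: drop the whole buff */
		xdp_return_buff(xdp);
		return false;
	}

	return true;
}

When the buffer is later converted to an skb, the accumulated
sinfo->xdp_frags_truesize can be fed to xdp_update_skb_shared_info()
instead of the `xdp->frame_sz * nr_frags` estimate the commit message
calls out.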