From patchwork Sat Jun 17 12:11:31 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 109468
From: David Howells
To: netdev@vger.kernel.org
Cc: David Howells, Alexander Duyck, "David S. Miller",
	Eric Dumazet, Jakub Kicinski, Paolo Abeni, Willem de Bruijn,
	David Ahern, Matthew Wilcox, Jens Axboe, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Menglong Dong
Subject: [PATCH net-next v2 02/17] net: Display info about MSG_SPLICE_PAGES memory handling in proc
Date: Sat, 17 Jun 2023 13:11:31 +0100
Message-ID: <20230617121146.716077-3-dhowells@redhat.com>
In-Reply-To: <20230617121146.716077-1-dhowells@redhat.com>
References: <20230617121146.716077-1-dhowells@redhat.com>

Display information about the memory handling that MSG_SPLICE_PAGES does when copying slab-allocated data into page fragments. For each CPU that has a cached folio, the proc file displays the folio's pfn, the current offset within the folio and the folio's size. It also displays the number of folios refurbished and the number of folios replaced.
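By way of illustration, the file contents would look something like the following; the format comes from the seq_printf() calls below, but the counter values and pfns shown here are invented examples. Each per-CPU line gives the folio pfn, then the current offset out of the folio size:

	refurb=4282 repl=75
	[0] 14a8d3 8192/32768
	[1] 150211 24576/32768
	[2] 14f07b 32768/32768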
Signed-off-by: David Howells
cc: Alexander Duyck
cc: Eric Dumazet
cc: "David S. Miller"
cc: David Ahern
cc: Jakub Kicinski
cc: Paolo Abeni
cc: Jens Axboe
cc: Matthew Wilcox
cc: Menglong Dong
cc: netdev@vger.kernel.org
---
 net/core/skbuff.c | 42 +++++++++++++++++++++++++++++++++++++++---
 1 file changed, 39 insertions(+), 3 deletions(-)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index d962c93a429d..36605510a76d 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -83,6 +83,7 @@
 #include <linux/user_namespace.h>
 #include <linux/indirect_call_wrapper.h>
 #include <linux/textsearch.h>
+#include <linux/proc_fs.h>
 
 #include "dev.h"
 #include "sock_destructor.h"
@@ -6758,6 +6759,7 @@ nodefer:	__kfree_skb(skb);
 struct skb_splice_frag_cache {
 	struct folio	*folio;
 	void		*virt;
+	unsigned int	fsize;
 	unsigned int	offset;
 	/* we maintain a pagecount bias, so that we dont dirty cache line
 	 * containing page->_refcount every time we allocate a fragment.
@@ -6767,6 +6769,26 @@ struct skb_splice_frag_cache {
 };
 
 static DEFINE_PER_CPU(struct skb_splice_frag_cache, skb_splice_frag_cache);
+static atomic_t skb_splice_frag_replaced, skb_splice_frag_refurbished;
+
+static int skb_splice_show(struct seq_file *m, void *data)
+{
+	int cpu;
+
+	seq_printf(m, "refurb=%u repl=%u\n",
+		   atomic_read(&skb_splice_frag_refurbished),
+		   atomic_read(&skb_splice_frag_replaced));
+
+	for_each_possible_cpu(cpu) {
+		const struct skb_splice_frag_cache *cache =
+			per_cpu_ptr(&skb_splice_frag_cache, cpu);
+
+		seq_printf(m, "[%u] %lx %u/%u\n",
+			   cpu, folio_pfn(cache->folio),
+			   cache->offset, cache->fsize);
+	}
+	return 0;
+}
 
 /**
  * alloc_skb_frag - Allocate a page fragment for using in a socket
@@ -6803,17 +6825,21 @@ void *alloc_skb_frag(size_t fragsz, gfp_t gfp)
 
 insufficient_space:
 	/* See if we can refurbish the current folio. */
-	if (!folio || !folio_ref_sub_and_test(folio, cache->pagecnt_bias))
+	if (!folio)
 		goto get_new_folio;
+	if (!folio_ref_sub_and_test(folio, cache->pagecnt_bias))
+		goto replace_folio;
 	if (unlikely(cache->pfmemalloc)) {
 		__folio_put(folio);
-		goto get_new_folio;
+		goto replace_folio;
 	}
 
 	fsize = folio_size(folio);
 	if (unlikely(fragsz > fsize))
 		goto frag_too_big;
 
+	atomic_inc(&skb_splice_frag_refurbished);
+
 	/* OK, page count is 0, we can safely set it */
 	folio_set_count(folio, PAGE_FRAG_CACHE_MAX_SIZE + 1);
 
@@ -6822,6 +6848,8 @@ void *alloc_skb_frag(size_t fragsz, gfp_t gfp)
 	offset = fsize;
 	goto try_again;
 
+replace_folio:
+	atomic_inc(&skb_splice_frag_replaced);
 get_new_folio:
 	if (!spare) {
 		cache->folio = NULL;
@@ -6848,6 +6876,7 @@ void *alloc_skb_frag(size_t fragsz, gfp_t gfp)
 
 	cache->folio = spare;
 	cache->virt = folio_address(spare);
+	cache->fsize = folio_size(spare);
 	folio = spare;
 	spare = NULL;
 
@@ -6858,7 +6887,7 @@ void *alloc_skb_frag(size_t fragsz, gfp_t gfp)
 
 	/* Reset page count bias and offset to start of new frag */
 	cache->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-	offset = folio_size(folio);
+	offset = cache->fsize;
 	goto try_again;
 
 frag_too_big:
@@ -7007,3 +7036,10 @@ ssize_t skb_splice_from_iter(struct sk_buff *skb, struct iov_iter *iter,
 	return spliced ?: ret;
 }
 EXPORT_SYMBOL(skb_splice_from_iter);
+
+static int skb_splice_init(void)
+{
+	proc_create_single("pagefrags", S_IFREG | 0444, NULL, &skb_splice_show);
+	return 0;
+}
+late_initcall(skb_splice_init);