From patchwork Tue Jun 20 14:53:21 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 110559
From: David Howells
To: netdev@vger.kernel.org
Cc: David Howells,
    Alexander Duyck,
    "David S. Miller",
    Eric Dumazet,
    Jakub Kicinski,
    Paolo Abeni,
    Willem de Bruijn,
    David Ahern,
    Matthew Wilcox,
    Jens Axboe,
    linux-mm@kvack.org,
    linux-kernel@vger.kernel.org,
    Menglong Dong
Subject: [PATCH net-next v3 02/18] net: Display info about MSG_SPLICE_PAGES memory handling in proc
Date: Tue, 20 Jun 2023 15:53:21 +0100
Message-ID: <20230620145338.1300897-3-dhowells@redhat.com>
In-Reply-To: <20230620145338.1300897-1-dhowells@redhat.com>
References: <20230620145338.1300897-1-dhowells@redhat.com>

Display information about the memory handling that MSG_SPLICE_PAGES does
to copy slabbed data into page fragments.  For each CPU that has a cached
folio, it displays the folio pfn, the offset pointer within the folio and
the size of the folio.  It also displays the number of pages refurbished
and the number of pages replaced.

Signed-off-by: David Howells
cc: Alexander Duyck
cc: Eric Dumazet
cc: "David S. Miller"
cc: David Ahern
cc: Jakub Kicinski
cc: Paolo Abeni
cc: Jens Axboe
cc: Matthew Wilcox
cc: Menglong Dong
cc: netdev@vger.kernel.org
---
 net/core/skbuff.c | 42 +++++++++++++++++++++++++++++++++++++++---
 1 file changed, 39 insertions(+), 3 deletions(-)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index d962c93a429d..36605510a76d 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -83,6 +83,7 @@
 #include
 #include
 #include
+#include
 
 #include "dev.h"
 #include "sock_destructor.h"
@@ -6758,6 +6759,7 @@ nodefer:	__kfree_skb(skb);
 struct skb_splice_frag_cache {
 	struct folio	*folio;
 	void		*virt;
+	unsigned int	fsize;
 	unsigned int	offset;
 	/* we maintain a pagecount bias, so that we dont dirty cache line
 	 * containing page->_refcount every time we allocate a fragment.
@@ -6767,6 +6769,26 @@ struct skb_splice_frag_cache {
 };
 
 static DEFINE_PER_CPU(struct skb_splice_frag_cache, skb_splice_frag_cache);
+static atomic_t skb_splice_frag_replaced, skb_splice_frag_refurbished;
+
+static int skb_splice_show(struct seq_file *m, void *data)
+{
+	int cpu;
+
+	seq_printf(m, "refurb=%u repl=%u\n",
+		   atomic_read(&skb_splice_frag_refurbished),
+		   atomic_read(&skb_splice_frag_replaced));
+
+	for_each_possible_cpu(cpu) {
+		const struct skb_splice_frag_cache *cache =
+			per_cpu_ptr(&skb_splice_frag_cache, cpu);
+
+		seq_printf(m, "[%u] %lx %u/%u\n",
+			   cpu, folio_pfn(cache->folio),
+			   cache->offset, cache->fsize);
+	}
+	return 0;
+}
 
 /**
  * alloc_skb_frag - Allocate a page fragment for using in a socket
@@ -6803,17 +6825,21 @@ void *alloc_skb_frag(size_t fragsz, gfp_t gfp)
 
 insufficient_space:
 	/* See if we can refurbish the current folio. */
-	if (!folio || !folio_ref_sub_and_test(folio, cache->pagecnt_bias))
+	if (!folio)
 		goto get_new_folio;
+	if (!folio_ref_sub_and_test(folio, cache->pagecnt_bias))
+		goto replace_folio;
 	if (unlikely(cache->pfmemalloc)) {
 		__folio_put(folio);
-		goto get_new_folio;
+		goto replace_folio;
 	}
 
 	fsize = folio_size(folio);
 	if (unlikely(fragsz > fsize))
 		goto frag_too_big;
 
+	atomic_inc(&skb_splice_frag_refurbished);
+
 	/* OK, page count is 0, we can safely set it */
 	folio_set_count(folio, PAGE_FRAG_CACHE_MAX_SIZE + 1);
 
@@ -6822,6 +6848,8 @@ void *alloc_skb_frag(size_t fragsz, gfp_t gfp)
 	offset = fsize;
 	goto try_again;
 
+replace_folio:
+	atomic_inc(&skb_splice_frag_replaced);
 get_new_folio:
 	if (!spare) {
 		cache->folio = NULL;
@@ -6848,6 +6876,7 @@ void *alloc_skb_frag(size_t fragsz, gfp_t gfp)
 
 	cache->folio = spare;
 	cache->virt  = folio_address(spare);
+	cache->fsize = folio_size(spare);
 	folio = spare;
 	spare = NULL;
 
@@ -6858,7 +6887,7 @@ void *alloc_skb_frag(size_t fragsz, gfp_t gfp)
 
 	/* Reset page count bias and offset to start of new frag */
 	cache->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-	offset = folio_size(folio);
+	offset = cache->fsize;
 	goto try_again;
 
 frag_too_big:
@@ -7007,3 +7036,10 @@ ssize_t skb_splice_from_iter(struct sk_buff *skb, struct iov_iter *iter,
 	return spliced ?: ret;
 }
 EXPORT_SYMBOL(skb_splice_from_iter);
+
+static int skb_splice_init(void)
+{
+	proc_create_single("pagefrags", S_IFREG | 0444, NULL, &skb_splice_show);
+	return 0;
+}
+late_initcall(skb_splice_init);
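
For illustration only: going by the two seq_printf() format strings in
skb_splice_show() above ("refurb=%u repl=%u\n" and "[%u] %lx %u/%u\n"),
reading the new file might look something like this on a two-CPU box --
all pfn, offset, size and counter values here are invented, not measured:

	# cat /proc/pagefrags
	refurb=3 repl=14
	[0] 10d7c0 8192/16384
	[1] 10e2c0 0/16384

The first line gives the global refurbished/replaced counters; each [cpu]
line then shows the cached folio's pfn in hex, followed by the current
allocation offset and the folio size recorded in that CPU's
skb_splice_frag_cache.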