From patchwork Fri Jun 16 16:12:45 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 109241
From: David Howells
To: netdev@vger.kernel.org
Cc: David Howells, Alexander Duyck, "David S. Miller", Eric Dumazet,
    Jakub Kicinski, Paolo Abeni, Willem de Bruijn, David Ahern,
    Matthew Wilcox, Jens Axboe, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Menglong Dong
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Willem de Bruijn , David Ahern , Matthew Wilcox , Jens Axboe , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Menglong Dong Subject: [PATCH net-next 02/17] net: Display info about MSG_SPLICE_PAGES memory handling in proc Date: Fri, 16 Jun 2023 17:12:45 +0100 Message-ID: <20230616161301.622169-3-dhowells@redhat.com> In-Reply-To: <20230616161301.622169-1-dhowells@redhat.com> References: <20230616161301.622169-1-dhowells@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.9 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1768876652699083689?= X-GMAIL-MSGID: =?utf-8?q?1768876652699083689?= Display information about the memory handling MSG_SPLICE_PAGES does to copy slabbed data into page fragments. For each CPU that has a cached folio, it displays the folio pfn, the offset pointer within the folio and the size of the folio. It also displays the number of pages refurbished and the number of pages replaced. Signed-off-by: David Howells cc: Alexander Duyck cc: Eric Dumazet cc: "David S. Miller" cc: David Ahern cc: Jakub Kicinski cc: Paolo Abeni cc: Jens Axboe cc: Matthew Wilcox cc: Menglong Dong cc: netdev@vger.kernel.org --- net/core/skbuff.c | 42 +++++++++++++++++++++++++++++++++++++++--- 1 file changed, 39 insertions(+), 3 deletions(-) diff --git a/net/core/skbuff.c b/net/core/skbuff.c index 9bd8d6bf6c21..c388a73e5d4e 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -83,6 +83,7 @@ #include #include #include +#include #include "dev.h" #include "sock_destructor.h" @@ -6758,6 +6759,7 @@ nodefer: __kfree_skb(skb); struct skb_splice_frag_cache { struct folio *folio; void *virt; + unsigned int fsize; unsigned int offset; /* we maintain a pagecount bias, so that we dont dirty cache line * containing page->_refcount every time we allocate a fragment. @@ -6767,6 +6769,26 @@ struct skb_splice_frag_cache { }; static DEFINE_PER_CPU(struct skb_splice_frag_cache, skb_splice_frag_cache); +static atomic_t skb_splice_frag_replaced, skb_splice_frag_refurbished; + +static int skb_splice_show(struct seq_file *m, void *data) +{ + int cpu; + + seq_printf(m, "refurb=%u repl=%u\n", + atomic_read(&skb_splice_frag_refurbished), + atomic_read(&skb_splice_frag_replaced)); + + for_each_possible_cpu(cpu) { + const struct skb_splice_frag_cache *cache = + per_cpu_ptr(&skb_splice_frag_cache, cpu); + + seq_printf(m, "[%u] %lx %u/%u\n", + cpu, folio_pfn(cache->folio), + cache->offset, cache->fsize); + } + return 0; +} /** * alloc_skb_frag - Allocate a page fragment for using in a socket @@ -6803,17 +6825,21 @@ void *alloc_skb_frag(size_t fragsz, gfp_t gfp) insufficient_space: /* See if we can refurbish the current folio. 
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 9bd8d6bf6c21..c388a73e5d4e 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -83,6 +83,7 @@
 #include
 #include
 #include
+#include <linux/proc_fs.h>
 
 #include "dev.h"
 #include "sock_destructor.h"
@@ -6758,6 +6759,7 @@ nodefer:	__kfree_skb(skb);
 struct skb_splice_frag_cache {
 	struct folio	*folio;
 	void		*virt;
+	unsigned int	fsize;
 	unsigned int	offset;
 	/* we maintain a pagecount bias, so that we dont dirty cache line
 	 * containing page->_refcount every time we allocate a fragment.
@@ -6767,6 +6769,26 @@ struct skb_splice_frag_cache {
 };
 
 static DEFINE_PER_CPU(struct skb_splice_frag_cache, skb_splice_frag_cache);
+static atomic_t skb_splice_frag_replaced, skb_splice_frag_refurbished;
+
+static int skb_splice_show(struct seq_file *m, void *data)
+{
+	int cpu;
+
+	seq_printf(m, "refurb=%u repl=%u\n",
+		   atomic_read(&skb_splice_frag_refurbished),
+		   atomic_read(&skb_splice_frag_replaced));
+
+	for_each_possible_cpu(cpu) {
+		const struct skb_splice_frag_cache *cache =
+			per_cpu_ptr(&skb_splice_frag_cache, cpu);
+
+		seq_printf(m, "[%u] %lx %u/%u\n",
+			   cpu, folio_pfn(cache->folio),
+			   cache->offset, cache->fsize);
+	}
+	return 0;
+}
 
 /**
  * alloc_skb_frag - Allocate a page fragment for using in a socket
@@ -6803,17 +6825,21 @@ void *alloc_skb_frag(size_t fragsz, gfp_t gfp)
 
 insufficient_space:
 	/* See if we can refurbish the current folio. */
-	if (!folio || !folio_ref_sub_and_test(folio, cache->pagecnt_bias))
+	if (!folio)
 		goto get_new_folio;
+	if (!folio_ref_sub_and_test(folio, cache->pagecnt_bias))
+		goto replace_folio;
 
 	if (unlikely(cache->pfmemalloc)) {
 		__folio_put(folio);
-		goto get_new_folio;
+		goto replace_folio;
 	}
 
 	fsize = folio_size(folio);
 	if (unlikely(fragsz > fsize))
 		goto frag_too_big;
 
+	atomic_inc(&skb_splice_frag_refurbished);
+
 	/* OK, page count is 0, we can safely set it */
 	folio_set_count(folio, PAGE_FRAG_CACHE_MAX_SIZE + 1);
 
@@ -6822,6 +6848,8 @@ void *alloc_skb_frag(size_t fragsz, gfp_t gfp)
 	offset = fsize;
 	goto try_again;
 
+replace_folio:
+	atomic_inc(&skb_splice_frag_replaced);
 get_new_folio:
 	if (!spare) {
 		cache->folio = NULL;
@@ -6848,6 +6876,7 @@ void *alloc_skb_frag(size_t fragsz, gfp_t gfp)
 
 	cache->folio = spare;
 	cache->virt  = folio_address(spare);
+	cache->fsize = folio_size(spare);
 	folio = spare;
 	spare = NULL;
 
@@ -6858,7 +6887,7 @@ void *alloc_skb_frag(size_t fragsz, gfp_t gfp)
 
 	/* Reset page count bias and offset to start of new frag */
 	cache->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
-	offset = folio_size(folio);
+	offset = cache->fsize;
 	goto try_again;
 
 frag_too_big:
@@ -7008,3 +7037,10 @@ ssize_t skb_splice_from_iter(struct sk_buff *skb, struct iov_iter *iter,
 	return spliced ?: ret;
 }
 EXPORT_SYMBOL(skb_splice_from_iter);
+
+static int skb_splice_init(void)
+{
+	proc_create_single("pagefrags", S_IFREG | 0444, NULL, &skb_splice_show);
+	return 0;
+}
+late_initcall(skb_splice_init);
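A further sketch (also not part of the patch), for readers unfamiliar with
proc_create_single(): as used in skb_splice_init() above, it registers a
read-only procfs entry backed by a seq_file whose show callback runs on
every read.  A minimal, self-contained module demonstrating the same
pattern; the "demo" name and printed values are hypothetical:

#include <linux/module.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>

/* Show callback: seq_printf() appends formatted text to the buffer
 * handed back to whoever reads /proc/demo.
 */
static int demo_show(struct seq_file *m, void *data)
{
	seq_printf(m, "hello=%d\n", 42);
	return 0;
}

static int __init demo_init(void)
{
	/* Create /proc/demo, world-readable; demo_show() runs per read. */
	proc_create_single("demo", 0444, NULL, demo_show);
	return 0;
}

static void __exit demo_exit(void)
{
	remove_proc_entry("demo", NULL);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");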