From patchwork Wed May 24 15:33:01 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 98559
From: David Howells
To: netdev@vger.kernel.org
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Willem de Bruijn , David Ahern , Matthew Wilcox , Jens Axboe , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Jeroen de Borst , Catherine Sullivan , Shailend Chand , Felix Fietkau , John Crispin , Sean Wang , Mark Lee , Lorenzo Bianconi , Matthias Brugger , AngeloGioacchino Del Regno , Keith Busch , Jens Axboe , Christoph Hellwig , Sagi Grimberg , Chaitanya Kulkarni , Andrew Morton , linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org, linux-nvme@lists.infradead.org Subject: [PATCH net-next 02/12] mm: Provide a page_frag_cache allocator cleanup function Date: Wed, 24 May 2023 16:33:01 +0100 Message-Id: <20230524153311.3625329-3-dhowells@redhat.com> In-Reply-To: <20230524153311.3625329-1-dhowells@redhat.com> References: <20230524153311.3625329-1-dhowells@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1766790661372845702?= X-GMAIL-MSGID: =?utf-8?q?1766790661372845702?= Provide a function to clean up a page_frag_cache allocator rather than doing it manually each time. Signed-off-by: David Howells cc: "David S. Miller" cc: Eric Dumazet cc: Jakub Kicinski cc: Paolo Abeni cc: Jens Axboe cc: Jeroen de Borst cc: Catherine Sullivan cc: Shailend Chand cc: Felix Fietkau cc: John Crispin cc: Sean Wang cc: Mark Lee cc: Lorenzo Bianconi cc: Matthias Brugger cc: AngeloGioacchino Del Regno cc: Keith Busch cc: Jens Axboe cc: Christoph Hellwig cc: Sagi Grimberg cc: Chaitanya Kulkarni cc: Andrew Morton cc: Matthew Wilcox cc: netdev@vger.kernel.org cc: linux-arm-kernel@lists.infradead.org cc: linux-mediatek@lists.infradead.org cc: linux-nvme@lists.infradead.org cc: linux-mm@kvack.org --- drivers/net/ethernet/google/gve/gve_main.c | 11 ++--------- drivers/net/ethernet/mediatek/mtk_wed_wo.c | 17 ++--------------- drivers/nvme/host/tcp.c | 8 +------- drivers/nvme/target/tcp.c | 5 +---- include/linux/gfp.h | 2 ++ mm/page_frag_alloc.c | 17 +++++++++++++++++ 6 files changed, 25 insertions(+), 35 deletions(-) diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c index 8fb70db63b8b..55feab29bed9 100644 --- a/drivers/net/ethernet/google/gve/gve_main.c +++ b/drivers/net/ethernet/google/gve/gve_main.c @@ -1251,17 +1251,10 @@ static void gve_unreg_xdp_info(struct gve_priv *priv) static void gve_drain_page_cache(struct gve_priv *priv) { - struct page_frag_cache *nc; int i; - for (i = 0; i < priv->rx_cfg.num_queues; i++) { - nc = &priv->rx[i].page_cache; - if (nc->va) { - __page_frag_cache_drain(virt_to_page(nc->va), - nc->pagecnt_bias); - nc->va = NULL; - } - } + for (i = 0; i < priv->rx_cfg.num_queues; i++) + page_frag_cache_clear(&priv->rx[i].page_cache); } static int gve_open(struct net_device *dev) diff --git a/drivers/net/ethernet/mediatek/mtk_wed_wo.c b/drivers/net/ethernet/mediatek/mtk_wed_wo.c index 69fba29055e9..d90fea2c7d04 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed_wo.c +++ b/drivers/net/ethernet/mediatek/mtk_wed_wo.c @@ -286,7 +286,6 @@ mtk_wed_wo_queue_free(struct 
 static void
 mtk_wed_wo_queue_tx_clean(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q)
 {
-	struct page *page;
 	int i;
 
 	for (i = 0; i < q->n_desc; i++) {
@@ -298,19 +297,12 @@ mtk_wed_wo_queue_tx_clean(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q)
 		entry->buf = NULL;
 	}
 
-	if (!q->cache.va)
-		return;
-
-	page = virt_to_page(q->cache.va);
-	__page_frag_cache_drain(page, q->cache.pagecnt_bias);
-	memset(&q->cache, 0, sizeof(q->cache));
+	page_frag_cache_clear(&q->cache);
 }
 
 static void
 mtk_wed_wo_queue_rx_clean(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q)
 {
-	struct page *page;
-
 	for (;;) {
 		void *buf = mtk_wed_wo_dequeue(wo, q, NULL, true);
 
@@ -320,12 +312,7 @@ mtk_wed_wo_queue_rx_clean(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q)
 		skb_free_frag(buf);
 	}
 
-	if (!q->cache.va)
-		return;
-
-	page = virt_to_page(q->cache.va);
-	__page_frag_cache_drain(page, q->cache.pagecnt_bias);
-	memset(&q->cache, 0, sizeof(q->cache));
+	page_frag_cache_clear(&q->cache);
 }
 
 static void
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index bf0230442d57..dcc35f6bff8c 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -1315,7 +1315,6 @@ static int nvme_tcp_alloc_async_req(struct nvme_tcp_ctrl *ctrl)
 
 static void nvme_tcp_free_queue(struct nvme_ctrl *nctrl, int qid)
 {
-	struct page *page;
 	struct nvme_tcp_ctrl *ctrl = to_tcp_ctrl(nctrl);
 	struct nvme_tcp_queue *queue = &ctrl->queues[qid];
 	unsigned int noreclaim_flag;
@@ -1326,12 +1325,7 @@ static void nvme_tcp_free_queue(struct nvme_ctrl *nctrl, int qid)
 	if (queue->hdr_digest || queue->data_digest)
 		nvme_tcp_free_crypto(queue);
 
-	if (queue->pf_cache.va) {
-		page = virt_to_head_page(queue->pf_cache.va);
-		__page_frag_cache_drain(page, queue->pf_cache.pagecnt_bias);
-		queue->pf_cache.va = NULL;
-	}
-
+	page_frag_cache_clear(&queue->pf_cache);
 	noreclaim_flag = memalloc_noreclaim_save();
 	sock_release(queue->sock);
 	memalloc_noreclaim_restore(noreclaim_flag);
diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index ed98df72c76b..984e6ce85dcd 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -1464,7 +1464,6 @@ static void nvmet_tcp_free_cmd_data_in_buffers(struct nvmet_tcp_queue *queue)
 
 static void nvmet_tcp_release_queue_work(struct work_struct *w)
 {
-	struct page *page;
 	struct nvmet_tcp_queue *queue =
 		container_of(w, struct nvmet_tcp_queue, release_work);
 
@@ -1486,9 +1485,7 @@ static void nvmet_tcp_release_queue_work(struct work_struct *w)
 	if (queue->hdr_digest || queue->data_digest)
 		nvmet_tcp_free_crypto(queue);
 	ida_free(&nvmet_tcp_queue_ida, queue->idx);
-
-	page = virt_to_head_page(queue->pf_cache.va);
-	__page_frag_cache_drain(page, queue->pf_cache.pagecnt_bias);
+	page_frag_cache_clear(&queue->pf_cache);
 	kfree(queue);
 }
 
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index ed8cb537c6a7..03504beb51e4 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -314,6 +314,8 @@ static inline void *page_frag_alloc(struct page_frag_cache *nc,
 	return page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
 }
 
+void page_frag_cache_clear(struct page_frag_cache *nc);
+
 extern void page_frag_free(void *addr);
 
 #define __free_page(page) __free_pages((page), 0)
diff --git a/mm/page_frag_alloc.c b/mm/page_frag_alloc.c
index bee95824ef8f..e02b81d68dc4 100644
--- a/mm/page_frag_alloc.c
+++ b/mm/page_frag_alloc.c
@@ -46,6 +46,23 @@ void __page_frag_cache_drain(struct page *page, unsigned int count)
 }
 EXPORT_SYMBOL(__page_frag_cache_drain);
 
+/**
+ * page_frag_cache_clear - Clear out a page fragment cache
+ * @nc: The cache to clear
+ *
+ * Discard any pages still cached in a page fragment cache.
+ */
+void page_frag_cache_clear(struct page_frag_cache *nc)
+{
+	if (nc->va) {
+		struct page *page = virt_to_head_page(nc->va);
+
+		__page_frag_cache_drain(page, nc->pagecnt_bias);
+		nc->va = NULL;
+	}
+}
+EXPORT_SYMBOL(page_frag_cache_clear);
+
 void *page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz,
 			    gfp_t gfp_mask, unsigned int align_mask)
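
As a usage sketch (not part of the patch itself), here is how a
hypothetical consumer might pair the existing page_frag_alloc() with the
new helper once this lands.  "struct my_ctx" and the two functions around
it are illustrative assumptions; only page_frag_alloc() and
page_frag_cache_clear(), both declared in linux/gfp.h above, are the real
API:

	#include <linux/gfp.h>

	struct my_ctx {
		struct page_frag_cache frag_cache;	/* assumed zero-initialised */
	};

	static void *my_ctx_alloc_frag(struct my_ctx *ctx, unsigned int size)
	{
		/* Carve a fragment out of the cache's current page. */
		return page_frag_alloc(&ctx->frag_cache, size, GFP_KERNEL);
	}

	static void my_ctx_exit(struct my_ctx *ctx)
	{
		/* Before this patch, every caller open-coded the drain:
		 *
		 *	if (ctx->frag_cache.va) {
		 *		__page_frag_cache_drain(virt_to_head_page(ctx->frag_cache.va),
		 *					ctx->frag_cache.pagecnt_bias);
		 *		ctx->frag_cache.va = NULL;
		 *	}
		 *
		 * The helper replaces all of that with one call:
		 */
		page_frag_cache_clear(&ctx->frag_cache);
	}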