From patchwork Sat Jun 17 12:11:30 2023
X-Patchwork-Submitter: David Howells <dhowells@redhat.com>
X-Patchwork-Id: 109467
From: David Howells <dhowells@redhat.com>
To: netdev@vger.kernel.org
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Willem de Bruijn , David Ahern , Matthew Wilcox , Jens Axboe , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Menglong Dong Subject: [PATCH net-next v2 01/17] net: Copy slab data for sendmsg(MSG_SPLICE_PAGES) Date: Sat, 17 Jun 2023 13:11:30 +0100 Message-ID: <20230617121146.716077-2-dhowells@redhat.com> In-Reply-To: <20230617121146.716077-1-dhowells@redhat.com> References: <20230617121146.716077-1-dhowells@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.5 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H5,RCVD_IN_MSPIKE_WL,SPF_HELO_NONE,SPF_NONE, T_SCC_BODY_TEXT_LINE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1768951942130144464?= X-GMAIL-MSGID: =?utf-8?q?1768951942130144464?= If sendmsg() is passed MSG_SPLICE_PAGES and is given a buffer that contains some data that's resident in the slab, copy it rather than returning EIO. This can be made use of by a number of drivers in the kernel, including: iwarp, ceph/rds, dlm, nvme, ocfs2, drdb. It could also be used by iscsi, rxrpc, sunrpc, cifs and probably others. skb_splice_from_iter() is given it's own fragment allocator as page_frag_alloc_align() can't be used because it does no locking to prevent parallel callers from racing. alloc_skb_frag() uses a separate folio for each cpu and locks to the cpu whilst allocating, reenabling cpu migration around folio allocation. This could allocate a whole page instead for each fragment to be copied, as alloc_skb_with_frags() would do instead, but that would waste a lot of space (most of the fragments look like they're going to be small). This allows an entire message that consists of, say, a protocol header or two, a number of pages of data and a protocol footer to be sent using a single call to sock_sendmsg(). The callers could be made to copy the data into fragments before calling sendmsg(), but that then penalises them if MSG_SPLICE_PAGES gets ignored. Signed-off-by: David Howells cc: Alexander Duyck cc: Eric Dumazet cc: "David S. Miller" cc: David Ahern cc: Jakub Kicinski cc: Paolo Abeni cc: Jens Axboe cc: Matthew Wilcox cc: Menglong Dong cc: netdev@vger.kernel.org --- Notes: ver #2) - Fix parameter to put_cpu_ptr() to have an '&'. 
 include/linux/skbuff.h |   5 ++
 net/core/skbuff.c      | 171 ++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 173 insertions(+), 3 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 91ed66952580..0ba776cd9be8 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -5037,6 +5037,11 @@ static inline void skb_mark_for_recycle(struct sk_buff *skb)
 #endif
 }
 
+void *alloc_skb_frag(size_t fragsz, gfp_t gfp);
+void *copy_skb_frag(const void *s, size_t len, gfp_t gfp);
+ssize_t skb_splice_from_iter(struct sk_buff *skb, struct iov_iter *iter,
+			     ssize_t maxsize, gfp_t gfp);
+
 ssize_t skb_splice_from_iter(struct sk_buff *skb, struct iov_iter *iter,
 			     ssize_t maxsize, gfp_t gfp);
 
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index fee2b1c105fe..d962c93a429d 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -6755,6 +6755,145 @@ nodefer:	__kfree_skb(skb);
 	smp_call_function_single_async(cpu, &sd->defer_csd);
 }
 
+struct skb_splice_frag_cache {
+	struct folio	*folio;
+	void		*virt;
+	unsigned int	offset;
+	/* We maintain a pagecount bias, so that we don't dirty the cache
+	 * line containing page->_refcount every time we allocate a fragment.
+	 */
+	unsigned int	pagecnt_bias;
+	bool		pfmemalloc;
+};
+
+static DEFINE_PER_CPU(struct skb_splice_frag_cache, skb_splice_frag_cache);
+
+/**
+ * alloc_skb_frag - Allocate a page fragment for use in a socket
+ * @fragsz: The size of fragment required
+ * @gfp: Allocation flags
+ */
+void *alloc_skb_frag(size_t fragsz, gfp_t gfp)
+{
+	struct skb_splice_frag_cache *cache;
+	struct folio *folio, *spare = NULL;
+	size_t offset, fsize;
+	void *p;
+
+	if (WARN_ON_ONCE(fragsz == 0))
+		fragsz = 1;
+
+	cache = get_cpu_ptr(&skb_splice_frag_cache);
+reload:
+	folio = cache->folio;
+	offset = cache->offset;
+try_again:
+	if (fragsz > offset)
+		goto insufficient_space;
+
+	/* Make the allocation. */
+	cache->pagecnt_bias--;
+	offset = ALIGN_DOWN(offset - fragsz, SMP_CACHE_BYTES);
+	cache->offset = offset;
+	p = cache->virt + offset;
+	put_cpu_ptr(&skb_splice_frag_cache);
+	if (spare)
+		folio_put(spare);
+	return p;
+
+insufficient_space:
+	/* See if we can refurbish the current folio. */
+	if (!folio || !folio_ref_sub_and_test(folio, cache->pagecnt_bias))
+		goto get_new_folio;
+	if (unlikely(cache->pfmemalloc)) {
+		__folio_put(folio);
+		goto get_new_folio;
+	}
+
+	fsize = folio_size(folio);
+	if (unlikely(fragsz > fsize))
+		goto frag_too_big;
+
+	/* OK, page count is 0, we can safely set it */
+	folio_set_count(folio, PAGE_FRAG_CACHE_MAX_SIZE + 1);
+
+	/* Reset page count bias and offset to start of new frag */
+	cache->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
+	offset = fsize;
+	goto try_again;
+
+get_new_folio:
+	if (!spare) {
+		cache->folio = NULL;
+		put_cpu_ptr(&skb_splice_frag_cache);
+
+#if PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE
+		spare = folio_alloc(gfp | __GFP_NOWARN | __GFP_NORETRY |
+				    __GFP_NOMEMALLOC,
+				    PAGE_FRAG_CACHE_MAX_ORDER);
+		if (!spare)
+#endif
+			spare = folio_alloc(gfp, 0);
+		if (!spare)
+			return NULL;
+
+		cache = get_cpu_ptr(&skb_splice_frag_cache);
+		/* We may now be on a different cpu and/or someone else may
+		 * have refilled it
+		 */
+		cache->pfmemalloc = folio_is_pfmemalloc(spare);
+		if (cache->folio)
+			goto reload;
+	}
+
+	cache->folio = spare;
+	cache->virt  = folio_address(spare);
+	folio = spare;
+	spare = NULL;
+
+	/* Even if we own the page, we do not use atomic_set().  This would
+	 * break get_page_unless_zero() users.
+	 */
+	folio_ref_add(folio, PAGE_FRAG_CACHE_MAX_SIZE);
+
+	/* Reset page count bias and offset to start of new frag */
+	cache->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
+	offset = folio_size(folio);
+	goto try_again;
+
+frag_too_big:
+	/* The caller is trying to allocate a fragment with fragsz > PAGE_SIZE
+	 * but the cache isn't big enough to satisfy the request; this may
+	 * happen in low-memory conditions.  We don't release the cache page
+	 * because it could make memory pressure worse, so we simply return
+	 * NULL here.
+	 */
+	cache->offset = offset;
+	put_cpu_ptr(&skb_splice_frag_cache);
+	if (spare)
+		folio_put(spare);
+	return NULL;
+}
+EXPORT_SYMBOL(alloc_skb_frag);
+
+/**
+ * copy_skb_frag - Copy data into a page fragment.
+ * @s: The data to copy
+ * @len: The size of the data
+ * @gfp: Allocation flags
+ */
+void *copy_skb_frag(const void *s, size_t len, gfp_t gfp)
+{
+	void *p;
+
+	p = alloc_skb_frag(len, gfp);
+	if (!p)
+		return NULL;
+
+	return memcpy(p, s, len);
+}
+EXPORT_SYMBOL(copy_skb_frag);
+
 static void skb_splice_csum_page(struct sk_buff *skb, struct page *page,
 				 size_t offset, size_t len)
 {
@@ -6808,17 +6947,43 @@ ssize_t skb_splice_from_iter(struct sk_buff *skb, struct iov_iter *iter,
 			break;
 		}
 
+		if (space == 0 &&
+		    !skb_can_coalesce(skb, skb_shinfo(skb)->nr_frags,
+				      pages[0], off)) {
+			iov_iter_revert(iter, len);
+			break;
+		}
+
 		i = 0;
 		do {
 			struct page *page = pages[i++];
 			size_t part = min_t(size_t, PAGE_SIZE - off, len);
-
-			ret = -EIO;
-			if (WARN_ON_ONCE(!sendpage_ok(page)))
+			bool put = false;
+
+			if (PageSlab(page)) {
+				const void *p;
+				void *q;
+
+				p = kmap_local_page(page);
+				q = copy_skb_frag(p + off, part, gfp);
+				kunmap_local(p);
+				if (!q) {
+					iov_iter_revert(iter, len);
+					ret = -ENOMEM;
+					goto out;
+				}
+				page = virt_to_page(q);
+				off = offset_in_page(q);
+				put = true;
+			} else if (WARN_ON_ONCE(!sendpage_ok(page))) {
+				ret = -EIO;
 				goto out;
+			}
 
 			ret = skb_append_pagefrags(skb, page, off, part,
 						   frag_limit);
+			if (put)
+				put_page(page);
 			if (ret < 0) {
 				iov_iter_revert(iter, len);
 				goto out;
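
For completeness, a hypothetical direct user of the new helpers might look
like the following sketch (invented code; only copy_skb_frag(),
skb_append_pagefrags() and the page helpers are real, and "hdr"/"skb" are
assumed to come from the surrounding context):

	void *p;
	int ret;

	/* Copy a transient, slab-resident header into a per-cpu page
	 * fragment.  As with page_frag_alloc(), the caller is left holding
	 * one reference on the fragment's page.
	 */
	p = copy_skb_frag(hdr, hdr_len, GFP_KERNEL);
	if (!p)
		return -ENOMEM;

	ret = skb_append_pagefrags(skb, virt_to_page(p), offset_in_page(p),
				   hdr_len, MAX_SKB_FRAGS);
	put_page(virt_to_page(p));	/* the skb took its own ref */
	if (ret < 0)
		return ret;

This mirrors the pattern in the skb_splice_from_iter() hunk above: append
the fragment, then drop the allocator's reference, leaving the skb's
reference as the one keeping the fragment's page alive.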