From patchwork Tue Apr 11 16:08:50 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 82077
From: David Howells
To: netdev@vger.kernel.org
Cc: David Howells, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni, Willem de Bruijn, David Ahern, Matthew Wilcox, Al Viro,
    Christoph Hellwig, Jens Axboe, Jeff Layton, Christian Brauner,
    Chuck Lever III, Linus Torvalds, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH net-next v6 06/18] net: Add a function to splice pages into
 an skbuff for MSG_SPLICE_PAGES
Date: Tue, 11 Apr 2023 17:08:50 +0100
Message-Id: <20230411160902.4134381-7-dhowells@redhat.com>
In-Reply-To: <20230411160902.4134381-1-dhowells@redhat.com>
References: <20230411160902.4134381-1-dhowells@redhat.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Add a function to handle MSG_SPLICE_PAGES being passed internally to
sendmsg().  Pages are spliced into the given socket buffer if possible and
copied in if not (ie. they're slab pages or have a zero refcount).

Signed-off-by: David Howells
cc: Eric Dumazet
cc: "David S. Miller"
cc: David Ahern
cc: Jakub Kicinski
cc: Paolo Abeni
cc: Jens Axboe
cc: Matthew Wilcox
cc: netdev@vger.kernel.org
---
 include/linux/skbuff.h |   3 ++
 net/core/skbuff.c      | 110 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 113 insertions(+)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 6e508274d2a5..add43417b798 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -5070,5 +5070,8 @@ static inline void skb_mark_for_recycle(struct sk_buff *skb)
 }
 #endif
 
+ssize_t skb_splice_from_iter(struct sk_buff *skb, struct iov_iter *iter,
+			     ssize_t maxsize, gfp_t gfp);
+
 #endif /* __KERNEL__ */
 #endif /* _LINUX_SKBUFF_H */
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index d96175f58ca4..c90fc48a63a5 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -6838,3 +6838,113 @@ nodefer:	__kfree_skb(skb);
 	if (unlikely(kick) && !cmpxchg(&sd->defer_ipi_scheduled, 0, 1))
 		smp_call_function_single_async(cpu, &sd->defer_csd);
 }
+
+static void skb_splice_csum_page(struct sk_buff *skb, struct page *page,
+				 size_t offset, size_t len)
+{
+	const char *kaddr;
+	__wsum csum;
+
+	kaddr = kmap_local_page(page);
+	csum = csum_partial(kaddr + offset, len, 0);
+	kunmap_local(kaddr);
+	skb->csum = csum_block_add(skb->csum, csum, skb->len);
+}
+
+/**
+ * skb_splice_from_iter - Splice (or copy) pages to skbuff
+ * @skb: The buffer to add pages to
+ * @iter: Iterator representing the pages to be added
+ * @maxsize: Maximum amount of data to be added
+ * @gfp: Allocation flags
+ *
+ * This is a common helper function for supporting MSG_SPLICE_PAGES.  It
+ * extracts pages from an iterator and adds them to the socket buffer if
+ * possible, copying them to fragments if not possible (such as if they're
+ * slab pages).
+ *
+ * Returns the amount of data spliced/copied or -EMSGSIZE if there's
+ * insufficient space in the buffer to transfer anything.
+ */
+ssize_t skb_splice_from_iter(struct sk_buff *skb, struct iov_iter *iter,
+			     ssize_t maxsize, gfp_t gfp)
+{
+	struct page *pages[8], **ppages = pages;
+	unsigned int i;
+	ssize_t spliced = 0, ret = 0;
+	size_t frag_limit = READ_ONCE(sysctl_max_skb_frags);
+
+	while (iter->count > 0) {
+		ssize_t space, nr;
+		size_t off, len;
+
+		ret = -EMSGSIZE;
+		space = frag_limit - skb_shinfo(skb)->nr_frags;
+		if (space < 0)
+			break;
+
+		/* We might be able to coalesce without increasing nr_frags */
+		nr = clamp_t(size_t, space, 1, ARRAY_SIZE(pages));
+
+		len = iov_iter_extract_pages(iter, &ppages, maxsize, nr, 0, &off);
+		if (len <= 0) {
+			ret = len ?: -EIO;
+			break;
+		}
+
+		if (space == 0 &&
+		    !skb_can_coalesce(skb, skb_shinfo(skb)->nr_frags,
+				      pages[0], off)) {
+			iov_iter_revert(iter, len);
+			break;
+		}
+
+		i = 0;
+		do {
+			struct page *page = pages[i++];
+			size_t part = min_t(size_t, PAGE_SIZE - off, len);
+			bool put = false;
+
+			if (!sendpage_ok(page)) {
+				const void *p = kmap_local_page(page);
+				void *q;
+
+				q = page_frag_memdup(NULL, p + off, part, gfp,
+						     ULONG_MAX);
+				kunmap_local(p);
+				if (!q) {
+					iov_iter_revert(iter, len);
+					ret = -ENOMEM;
+					goto out;
+				}
+				page = virt_to_page(q);
+				off = offset_in_page(q);
+				put = true;
+			}
+
+			ret = skb_append_pagefrags(skb, page, off, part,
+						   frag_limit);
+			if (put)
+				put_page(page);
+			if (ret < 0) {
+				iov_iter_revert(iter, len);
+				goto out;
+			}
+
+			if (skb->ip_summed == CHECKSUM_NONE)
+				skb_splice_csum_page(skb, page, off, part);
+
+			off = 0;
+			spliced += part;
+			maxsize -= part;
+			len -= part;
+		} while (len > 0);
+
+		if (maxsize <= 0)
+			break;
+	}
+
+out:
+	skb_len_add(skb, spliced);
+	return spliced ?: ret;
+}