From patchwork Thu Jun 29 15:54:30 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 114352
From: David Howells
To: netdev@vger.kernel.org
Cc: David Howells, Matthew Wilcox, Dave Chinner, Matt Whitlock,
    Linus Torvalds, Jens Axboe, linux-fsdevel@kvack.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Christoph Hellwig, linux-fsdevel@vger.kernel.org
Subject: [RFC PATCH 1/4] splice: Fix corruption of spliced data after splice() returns
Date: Thu, 29 Jun 2023 16:54:30 +0100
Message-ID: <20230629155433.4170837-2-dhowells@redhat.com>
In-Reply-To: <20230629155433.4170837-1-dhowells@redhat.com>
References: <20230629155433.4170837-1-dhowells@redhat.com>

Splicing data from, say, a file into a pipe currently leaves the source
pages in the pipe after splice() returns - but this means that those pages
can be subsequently modified by shared-writable mmap(), write(), fallocate(),
etc. before they're consumed.

Fix this by stealing the pages in splice() before they're added to the pipe
if no one else is using them or has them mapped, and copying them otherwise.

Reported-by: Matt Whitlock
Link: https://lore.kernel.org/r/ec804f26-fa76-4fbe-9b1c-8fbbd829b735@mattwhitlock.name/
Signed-off-by: David Howells
cc: Matthew Wilcox
cc: Dave Chinner
cc: Christoph Hellwig
cc: Jens Axboe
cc: linux-fsdevel@vger.kernel.org
---
 mm/filemap.c  | 92 ++++++++++++++++++++++++++++++++++++++++++++++++---
 mm/internal.h |  4 +--
 mm/shmem.c    |  8 +++--
 3 files changed, 95 insertions(+), 9 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 9e44a49bbd74..a002df515966 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2838,15 +2838,87 @@ generic_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
 }
 EXPORT_SYMBOL(generic_file_read_iter);
 
+static inline void copy_folio_to_folio(struct folio *src, size_t src_offset,
+                                       struct folio *dst, size_t dst_offset,
+                                       size_t size)
+{
+        void *p, *q;
+
+        while (size > 0) {
+                size_t part = min3(PAGE_SIZE - src_offset % PAGE_SIZE,
+                                   PAGE_SIZE - dst_offset % PAGE_SIZE,
+                                   size);
+
+                p = kmap_local_folio(src, src_offset);
+                q = kmap_local_folio(dst, dst_offset);
+                memcpy(q, p, part);
+                kunmap_local(p);
+                kunmap_local(q);
+                src_offset += part;
+                dst_offset += part;
+                size -= part;
+        }
+}
+
 /*
- * Splice subpages from a folio into a pipe.
+ * Splice data from a folio into a pipe. The folio is stolen if no one else is
+ * using it and copied otherwise. We can't put the folio into the pipe still
+ * attached to the pagecache as that allows someone to modify it after the
+ * splice.
  */
-size_t splice_folio_into_pipe(struct pipe_inode_info *pipe,
-                              struct folio *folio, loff_t fpos, size_t size)
+ssize_t splice_folio_into_pipe(struct pipe_inode_info *pipe,
+                               struct folio *folio, loff_t fpos, size_t size)
 {
+        struct address_space *mapping;
+        struct folio *copy = NULL;
         struct page *page;
+        unsigned int flags = 0;
+        ssize_t ret;
         size_t spliced = 0, offset = offset_in_folio(folio, fpos);
 
+        folio_lock(folio);
+
+        mapping = folio_mapping(folio);
+        ret = -ENODATA;
+        if (!folio->mapping)
+                goto err_unlock; /* Truncated */
+        ret = -EIO;
+        if (!folio_test_uptodate(folio))
+                goto err_unlock;
+
+        /*
+         * At least for ext2 with nobh option, we need to wait on writeback
+         * completing on this folio, since we'll remove it from the pagecache.
+         * Otherwise truncate wont wait on the folio, allowing the disk blocks
+         * to be reused by someone else before we actually wrote our data to
+         * them. fs corruption ensues.
+         */
+        folio_wait_writeback(folio);
+
+        if (folio_has_private(folio) &&
+            !filemap_release_folio(folio, GFP_KERNEL))
+                goto need_copy;
+
+        /* If we succeed in removing the mapping, set LRU flag and add it. */
+        if (remove_mapping(mapping, folio)) {
+                folio_unlock(folio);
+                flags = PIPE_BUF_FLAG_LRU;
+                goto add_to_pipe;
+        }
+
+need_copy:
+        folio_unlock(folio);
+
+        copy = folio_alloc(GFP_KERNEL, 0);
+        if (!copy)
+                return -ENOMEM;
+
+        size = min(size, PAGE_SIZE - offset % PAGE_SIZE);
+        copy_folio_to_folio(folio, offset, copy, 0, size);
+        folio = copy;
+        offset = 0;
+
+add_to_pipe:
         page = folio_page(folio, offset / PAGE_SIZE);
         size = min(size, folio_size(folio) - offset);
         offset %= PAGE_SIZE;
@@ -2861,6 +2933,7 @@ size_t splice_folio_into_pipe(struct pipe_inode_info *pipe,
                         .page = page,
                         .offset = offset,
                         .len = part,
+                        .flags = flags,
                 };
                 folio_get(folio);
                 pipe->head++;
@@ -2869,7 +2942,13 @@ size_t splice_folio_into_pipe(struct pipe_inode_info *pipe,
                 offset = 0;
         }
 
+        if (copy)
+                folio_put(copy);
         return spliced;
+
+err_unlock:
+        folio_unlock(folio);
+        return ret;
 }
 
 /**
@@ -2947,7 +3026,7 @@ ssize_t filemap_splice_read(struct file *in, loff_t *ppos,
                 for (i = 0; i < folio_batch_count(&fbatch); i++) {
                         struct folio *folio = fbatch.folios[i];
-                        size_t n;
+                        ssize_t n;
 
                         if (folio_pos(folio) >= end_offset)
                                 goto out;
@@ -2963,8 +3042,11 @@ ssize_t filemap_splice_read(struct file *in, loff_t *ppos,
                         n = min_t(loff_t, len, isize - *ppos);
                         n = splice_folio_into_pipe(pipe, folio, *ppos, n);
-                        if (!n)
+                        if (n <= 0) {
+                                if (n < 0)
+                                        error = n;
                                 goto out;
+                        }
                         len -= n;
                         total_spliced += n;
                         *ppos += n;
diff --git a/mm/internal.h b/mm/internal.h
index a7d9e980429a..ae395e0f31d5 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -881,8 +881,8 @@ struct migration_target_control {
 /*
  * mm/filemap.c
  */
-size_t splice_folio_into_pipe(struct pipe_inode_info *pipe,
-                              struct folio *folio, loff_t fpos, size_t size);
+ssize_t splice_folio_into_pipe(struct pipe_inode_info *pipe,
+                               struct folio *folio, loff_t fpos, size_t size);
 
 /*
  * mm/vmalloc.c
diff --git a/mm/shmem.c b/mm/shmem.c
index 2f2e0e618072..969931b0f00e 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2783,7 +2783,8 @@ static ssize_t shmem_file_splice_read(struct file *in, loff_t *ppos,
         struct inode *inode = file_inode(in);
         struct address_space *mapping = inode->i_mapping;
         struct folio *folio = NULL;
-        size_t total_spliced = 0, used, npages, n, part;
+        ssize_t n;
+        size_t total_spliced = 0, used, npages, part;
         loff_t isize;
         int error = 0;
 
@@ -2844,8 +2845,11 @@ static ssize_t shmem_file_splice_read(struct file *in, loff_t *ppos,
                         n = splice_zeropage_into_pipe(pipe, *ppos, len);
                 }
 
-                if (!n)
+                if (n <= 0) {
+                        if (n < 0)
+                                error = n;
                         break;
+                }
                 len -= n;
                 total_spliced += n;
                 *ppos += n;
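
[Editorial illustration, not part of the patch: a minimal userspace sketch of
the behaviour being fixed; the file name and sizes are made up for the
example.  It splices a page's worth of a file into a pipe, rewrites the file
after splice() has returned, then reads the pipe.  Without the change above
the reader can observe the rewritten bytes; with it the pipe should carry a
snapshot of the data as it was at splice() time.]

/* splice-snapshot.c - show what a pipe reader sees after the source file
 * is rewritten behind a completed splice().
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
        char buf[16];
        int pfd[2];
        loff_t off = 0;
        int fd = open("testfile", O_RDWR | O_CREAT | O_TRUNC, 0600);

        if (fd < 0 || pipe(pfd) < 0) {
                perror("setup");
                exit(1);
        }

        /* Write the "before" contents and splice them into the pipe. */
        if (pwrite(fd, "AAAAAAAAAAAAAAAA", 16, 0) != 16 ||
            splice(fd, &off, pfd[1], NULL, 16, 0) != 16) {
                perror("splice");
                exit(1);
        }

        /* Modify the source file after splice() has returned. */
        pwrite(fd, "BBBBBBBBBBBBBBBB", 16, 0);

        /* See which bytes the pipe actually delivers. */
        if (read(pfd[0], buf, sizeof(buf)) != 16) {
                perror("read");
                exit(1);
        }
        printf("pipe saw: %.16s\n", buf);
        return 0;
}
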
From patchwork Thu Jun 29 15:54:31 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 114350
From: David Howells
To: netdev@vger.kernel.org
Cc: David Howells, Matthew Wilcox, Dave Chinner, Matt Whitlock,
    Linus Torvalds, Jens Axboe, linux-fsdevel@kvack.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Christoph Hellwig, linux-fsdevel@vger.kernel.org
Subject: [RFC PATCH 2/4] splice: Make vmsplice() steal or copy
Date: Thu, 29 Jun 2023 16:54:31 +0100
Message-ID: <20230629155433.4170837-3-dhowells@redhat.com>
In-Reply-To: <20230629155433.4170837-1-dhowells@redhat.com>
References: <20230629155433.4170837-1-dhowells@redhat.com>

Make vmsplice()-to-pipe try to steal gifted data, or else copy the source
data immediately, before adding it to the pipe.  This prevents the data
added to the pipe from being modified by write(), by shared-writable mmap()
and by fallocate().

[!] Note: I'm using unmap_mapping_folio() and remove_mapping() to steal a
    gifted page on behalf of vmsplice().  It partly works, but after a large
    batch of stealing it will oops, and I can't tell why as it dies in the
    middle of a huge chunk of macro-generated interval tree code.

[!] Note: I'm only allowing theft of pages with refcount <= 4.  refcount == 3
    would actually seem to be the right thing (one ref for the caller, one for
    the pagecache and one for our page table), but sometimes a fourth ref is
    held transiently (possibly a deferred put from page-in).
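
[Editorial illustration, not part of the patch: a minimal sketch of the
userspace gifting pattern that the new splice_try_to_steal_page() acts on.
The buffer has to be page-aligned and exactly one page long for the steal
path to be considered at all; anything else should fall back to the copy
path introduced here.]

/* vmsplice-gift.c - gift one page-aligned page to a pipe. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>

int main(void)
{
        long page = sysconf(_SC_PAGESIZE);
        void *buf = aligned_alloc(page, page);
        struct iovec iov = { .iov_base = buf, .iov_len = page };
        int pfd[2];
        ssize_t n;

        if (!buf || pipe(pfd) < 0) {
                perror("setup");
                return 1;
        }
        memset(buf, 'G', page);

        /* Gift the page; the kernel may now steal it or copy it. */
        n = vmsplice(pfd[1], &iov, 1, SPLICE_F_GIFT);
        printf("vmsplice gifted %zd bytes\n", n);

        /* After gifting, the caller must not rely on the buffer contents. */
        return 0;
}
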
Reported-by: Matt Whitlock
Link: https://lore.kernel.org/r/ec804f26-fa76-4fbe-9b1c-8fbbd829b735@mattwhitlock.name/
Signed-off-by: David Howells
cc: Matthew Wilcox
cc: Dave Chinner
cc: Christoph Hellwig
cc: Jens Axboe
cc: linux-fsdevel@vger.kernel.org
---
 fs/splice.c | 123 +++++++++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 113 insertions(+), 10 deletions(-)

diff --git a/fs/splice.c b/fs/splice.c
index 004eb1c4ce31..42af642c0ff8 100644
--- a/fs/splice.c
+++ b/fs/splice.c
@@ -37,6 +37,7 @@
 #include
 #include
+#include "../mm/internal.h"
 #include "internal.h"
 
 /*
@@ -1382,14 +1383,117 @@ static long __do_splice(struct file *in, loff_t __user *off_in,
         return ret;
 }
 
+static void copy_folio_to_folio(struct folio *src, size_t src_offset,
+                                struct folio *dst, size_t dst_offset,
+                                size_t size)
+{
+        void *p, *q;
+
+        while (size > 0) {
+                size_t part = min3(PAGE_SIZE - src_offset % PAGE_SIZE,
+                                   PAGE_SIZE - dst_offset % PAGE_SIZE,
+                                   size);
+
+                p = kmap_local_folio(src, src_offset);
+                q = kmap_local_folio(dst, dst_offset);
+                memcpy(q, p, part);
+                kunmap_local(p);
+                kunmap_local(q);
+                src_offset += part;
+                dst_offset += part;
+                size -= part;
+        }
+}
+
+static int splice_try_to_steal_page(struct pipe_inode_info *pipe,
+                                    struct page *page, size_t offset,
+                                    size_t size, unsigned int splice_flags)
+{
+        struct folio *folio = page_folio(page), *copy;
+        unsigned int flags = 0;
+        size_t fsize = folio_size(folio), spliced = 0;
+
+        if (!(splice_flags & SPLICE_F_GIFT) ||
+            fsize != PAGE_SIZE || offset != 0 || size != fsize)
+                goto need_copy;
+
+        /*
+         * For a folio to be stealable, the caller holds a ref, the mapping
+         * holds a ref and the page tables hold a ref; it may or may not also
+         * be on the LRU.  Anything else and someone else has access to it.
+         */
+        if (folio_ref_count(folio) > 4 || folio_mapcount(folio) != 1 ||
+            folio_maybe_dma_pinned(folio))
+                goto need_copy;
+
+        /* Try to steal. */
+        folio_lock(folio);
+
+        if (folio_ref_count(folio) > 4 || folio_mapcount(folio) != 1 ||
+            folio_maybe_dma_pinned(folio))
+                goto need_copy_unlock;
+        if (!folio->mapping)
+                goto need_copy_unlock; /* vmsplice race? */
+
+        /*
+         * Remove the folio from the process VM and then try to remove
+         * it from the mapping.  It we can't remove it, we'll have to
+         * copy it instead.
+         */
+        unmap_mapping_folio(folio);
+        if (remove_mapping(folio->mapping, folio)) {
+                folio_clear_mappedtodisk(folio);
+                flags |= PIPE_BUF_FLAG_LRU;
+                goto add_to_pipe;
+        }
+
+need_copy_unlock:
+        folio_unlock(folio);
+need_copy:
+
+        copy = folio_alloc(GFP_KERNEL, 0);
+        if (!copy)
+                return -ENOMEM;
+
+        size = min(size, PAGE_SIZE - offset % PAGE_SIZE);
+        copy_folio_to_folio(folio, offset, copy, 0, size);
+        folio_mark_uptodate(copy);
+        folio_put(folio);
+        folio = copy;
+        offset = 0;
+
+add_to_pipe:
+        page = folio_page(folio, offset / PAGE_SIZE);
+        size = min(size, folio_size(folio) - offset);
+        offset %= PAGE_SIZE;
+
+        while (spliced < size &&
+               !pipe_full(pipe->head, pipe->tail, pipe->max_usage)) {
+                struct pipe_buffer *buf = pipe_head_buf(pipe);
+                size_t part = min_t(size_t, PAGE_SIZE - offset, size - spliced);
+
+                *buf = (struct pipe_buffer) {
+                        .ops = &default_pipe_buf_ops,
+                        .page = page,
+                        .offset = offset,
+                        .len = part,
+                        .flags = flags,
+                };
+                folio_get(folio);
+                pipe->head++;
+                page++;
+                spliced += part;
+                offset = 0;
+        }
+
+        folio_put(folio);
+        return spliced;
+}
+
 static int iter_to_pipe(struct iov_iter *from,
                         struct pipe_inode_info *pipe,
                         unsigned flags)
 {
-        struct pipe_buffer buf = {
-                .ops = &user_page_pipe_buf_ops,
-                .flags = flags
-        };
         size_t total = 0;
         int ret = 0;
 
@@ -1407,12 +1511,11 @@ static int iter_to_pipe(struct iov_iter *from,
                 n = DIV_ROUND_UP(left + start, PAGE_SIZE);
                 for (i = 0; i < n; i++) {
-                        int size = min_t(int, left, PAGE_SIZE - start);
+                        size_t part = min_t(size_t, left,
+                                            PAGE_SIZE - start % PAGE_SIZE);
 
-                        buf.page = pages[i];
-                        buf.offset = start;
-                        buf.len = size;
-                        ret = add_to_pipe(pipe, &buf);
+                        ret = splice_try_to_steal_page(pipe, pages[i], start,
+                                                       part, flags);
                         if (unlikely(ret < 0)) {
                                 iov_iter_revert(from, left);
                                 // this one got dropped by add_to_pipe()
@@ -1421,7 +1524,7 @@ static int iter_to_pipe(struct iov_iter *from,
                                 goto out;
                         }
                         total += ret;
-                        left -= size;
+                        left -= part;
                         start = 0;
                 }
         }
From patchwork Thu Jun 29 15:54:32 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 114353
From: David Howells
To: netdev@vger.kernel.org
Cc: David Howells, Matthew Wilcox, Dave Chinner, Matt Whitlock,
    Linus Torvalds, Jens Axboe, linux-fsdevel@kvack.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Christoph Hellwig, linux-fsdevel@vger.kernel.org
Subject: [RFC PATCH 3/4] splice: Remove some now-unused bits
Date: Thu, 29 Jun 2023 16:54:32 +0100
Message-ID: <20230629155433.4170837-4-dhowells@redhat.com>
In-Reply-To: <20230629155433.4170837-1-dhowells@redhat.com>
References: <20230629155433.4170837-1-dhowells@redhat.com>

Remove some code that's no longer used as the ->confirm() op is no longer
used and pages spliced in from the pagecache and process VM are now
pre-stolen or copied.

Signed-off-by: David Howells
cc: Matthew Wilcox
cc: Dave Chinner
cc: Christoph Hellwig
cc: Jens Axboe
cc: linux-fsdevel@vger.kernel.org
---
 fs/fuse/dev.c             |  37 ---------
 fs/pipe.c                 |  12 ---
 fs/splice.c               | 155 +-------------------------------------
 include/linux/pipe_fs_i.h |  14 ----
 include/linux/splice.h    |   1 -
 mm/filemap.c              |   2 +-
 6 files changed, 3 insertions(+), 218 deletions(-)

diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
index 1a8f82f478cb..9718dce0f0d9 100644
--- a/fs/fuse/dev.c
+++ b/fs/fuse/dev.c
@@ -700,10 +700,6 @@ static int fuse_copy_fill(struct fuse_copy_state *cs)
                 struct pipe_buffer *buf = cs->pipebufs;
 
                 if (!cs->write) {
-                        err = pipe_buf_confirm(cs->pipe, buf);
-                        if (err)
-                                return err;
-
                         BUG_ON(!cs->nr_segs);
                         cs->currbuf = buf;
                         cs->pg = buf->page;
@@ -766,26 +762,6 @@ static int fuse_copy_do(struct fuse_copy_state *cs, void **val, unsigned *size)
         return ncpy;
 }
 
-static int fuse_check_folio(struct folio *folio)
-{
-        if (folio_mapped(folio) ||
-            folio->mapping != NULL ||
-            (folio->flags & PAGE_FLAGS_CHECK_AT_PREP &
-             ~(1 << PG_locked |
-               1 << PG_referenced |
-               1 << PG_uptodate |
-               1 << PG_lru |
-               1 << PG_active |
-               1 << PG_workingset |
-               1 << PG_reclaim |
-               1 << PG_waiters |
-               LRU_GEN_MASK | LRU_REFS_MASK))) {
-                dump_page(&folio->page, "fuse: trying to steal weird page");
-                return 1;
-        }
-        return 0;
-}
-
 static int fuse_try_move_page(struct fuse_copy_state *cs, struct page **pagep)
 {
         int err;
@@ -800,10 +776,6 @@ static int fuse_try_move_page(struct fuse_copy_state *cs, struct page **pagep)
 
         fuse_copy_finish(cs);
 
-        err = pipe_buf_confirm(cs->pipe, buf);
-        if (err)
-                goto out_put_old;
-
         BUG_ON(!cs->nr_segs);
         cs->currbuf = buf;
         cs->len = buf->len;
@@ -818,14 +790,6 @@ static int fuse_try_move_page(struct fuse_copy_state *cs, struct page **pagep)
 
         newfolio = page_folio(buf->page);
 
-        if (!folio_test_uptodate(newfolio))
-                folio_mark_uptodate(newfolio);
-
-        folio_clear_mappedtodisk(newfolio);
-
-        if (fuse_check_folio(newfolio) != 0)
-                goto out_fallback_unlock;
-
         /*
          * This is a new and locked page, it shouldn't be mapped or
          * have any special flags on it
@@ -2020,7 +1984,6 @@ static ssize_t fuse_dev_splice_write(struct pipe_inode_info *pipe,
                         goto out_free;
 
                 *obuf = *ibuf;
-                obuf->flags &= ~PIPE_BUF_FLAG_GIFT;
                 obuf->len = rem;
                 ibuf->offset += obuf->len;
                 ibuf->len -= obuf->len;
diff --git a/fs/pipe.c b/fs/pipe.c
index 2d88f73f585a..d5c86eb20f29 100644
--- a/fs/pipe.c
+++ b/fs/pipe.c
@@ -286,7 +286,6 @@ pipe_read(struct kiocb *iocb, struct iov_iter *to)
                         struct pipe_buffer *buf = &pipe->bufs[tail & mask];
                         size_t chars = buf->len;
                         size_t written;
-                        int error;
 
                         if (chars > total_len) {
                                 if (buf->flags & PIPE_BUF_FLAG_WHOLE) {
@@ -297,13 +296,6 @@ pipe_read(struct kiocb *iocb, struct iov_iter *to)
                                 chars = total_len;
                         }
 
-                        error = pipe_buf_confirm(pipe, buf);
-                        if (error) {
-                                if (!ret)
-                                        ret = error;
-                                break;
-                        }
-
                         written = copy_page_to_iter(buf->page, buf->offset, chars, to);
                         if (unlikely(written < chars)) {
                                 if (!ret)
@@ -462,10 +454,6 @@ pipe_write(struct kiocb *iocb, struct iov_iter *from)
                 if ((buf->flags & PIPE_BUF_FLAG_CAN_MERGE) &&
                     offset + chars <= PAGE_SIZE) {
-                        ret = pipe_buf_confirm(pipe, buf);
-                        if (ret)
-                                goto out;
-
                         ret = copy_page_from_iter(buf->page, offset, chars, from);
                         if (unlikely(ret < chars)) {
                                 ret = -EFAULT;
diff --git a/fs/splice.c b/fs/splice.c
index 42af642c0ff8..2b1f109a7d4f 100644
--- a/fs/splice.c
+++ b/fs/splice.c
@@ -56,129 +56,6 @@ static noinline void noinline pipe_clear_nowait(struct file *file)
         } while (!try_cmpxchg(&file->f_mode, &fmode, fmode & ~FMODE_NOWAIT));
 }
 
-/*
- * Attempt to steal a page from a pipe buffer. This should perhaps go into
- * a vm helper function, it's already simplified quite a bit by the
- * addition of remove_mapping(). If success is returned, the caller may
- * attempt to reuse this page for another destination.
- */
-static bool page_cache_pipe_buf_try_steal(struct pipe_inode_info *pipe,
-                struct pipe_buffer *buf)
-{
-        struct folio *folio = page_folio(buf->page);
-        struct address_space *mapping;
-
-        folio_lock(folio);
-
-        mapping = folio_mapping(folio);
-        if (mapping) {
-                WARN_ON(!folio_test_uptodate(folio));
-
-                /*
-                 * At least for ext2 with nobh option, we need to wait on
-                 * writeback completing on this folio, since we'll remove it
-                 * from the pagecache.  Otherwise truncate wont wait on the
-                 * folio, allowing the disk blocks to be reused by someone else
-                 * before we actually wrote our data to them. fs corruption
-                 * ensues.
-                 */
-                folio_wait_writeback(folio);
-
-                if (folio_has_private(folio) &&
-                    !filemap_release_folio(folio, GFP_KERNEL))
-                        goto out_unlock;
-
-                /*
-                 * If we succeeded in removing the mapping, set LRU flag
-                 * and return good.
-                 */
-                if (remove_mapping(mapping, folio)) {
-                        buf->flags |= PIPE_BUF_FLAG_LRU;
-                        return true;
-                }
-        }
-
-        /*
-         * Raced with truncate or failed to remove folio from current
-         * address space, unlock and return failure.
-         */
-out_unlock:
-        folio_unlock(folio);
-        return false;
-}
-
-static void page_cache_pipe_buf_release(struct pipe_inode_info *pipe,
-                                        struct pipe_buffer *buf)
-{
-        put_page(buf->page);
-        buf->flags &= ~PIPE_BUF_FLAG_LRU;
-}
-
-/*
- * Check whether the contents of buf is OK to access. Since the content
- * is a page cache page, IO may be in flight.
- */
-static int page_cache_pipe_buf_confirm(struct pipe_inode_info *pipe,
-                                       struct pipe_buffer *buf)
-{
-        struct page *page = buf->page;
-        int err;
-
-        if (!PageUptodate(page)) {
-                lock_page(page);
-
-                /*
-                 * Page got truncated/unhashed. This will cause a 0-byte
-                 * splice, if this is the first page.
-                 */
-                if (!page->mapping) {
-                        err = -ENODATA;
-                        goto error;
-                }
-
-                /*
-                 * Uh oh, read-error from disk.
-                 */
-                if (!PageUptodate(page)) {
-                        err = -EIO;
-                        goto error;
-                }
-
-                /*
-                 * Page is ok afterall, we are done.
-                 */
-                unlock_page(page);
-        }
-
-        return 0;
-error:
-        unlock_page(page);
-        return err;
-}
-
-const struct pipe_buf_operations page_cache_pipe_buf_ops = {
-        .confirm        = page_cache_pipe_buf_confirm,
-        .release        = page_cache_pipe_buf_release,
-        .try_steal      = page_cache_pipe_buf_try_steal,
-        .get            = generic_pipe_buf_get,
-};
-
-static bool user_page_pipe_buf_try_steal(struct pipe_inode_info *pipe,
-                struct pipe_buffer *buf)
-{
-        if (!(buf->flags & PIPE_BUF_FLAG_GIFT))
-                return false;
-
-        buf->flags |= PIPE_BUF_FLAG_LRU;
-        return generic_pipe_buf_try_steal(pipe, buf);
-}
-
-static const struct pipe_buf_operations user_page_pipe_buf_ops = {
-        .release        = page_cache_pipe_buf_release,
-        .try_steal      = user_page_pipe_buf_try_steal,
-        .get            = generic_pipe_buf_get,
-};
-
 static void wakeup_pipe_readers(struct pipe_inode_info *pipe)
 {
         smp_mb();
@@ -460,13 +337,6 @@ static int splice_from_pipe_feed(struct pipe_inode_info *pipe, struct splice_des
                 if (sd->len > sd->total_len)
                         sd->len = sd->total_len;
 
-                ret = pipe_buf_confirm(pipe, buf);
-                if (unlikely(ret)) {
-                        if (ret == -ENODATA)
-                                ret = 0;
-                        return ret;
-                }
-
                 ret = actor(pipe, buf, sd);
                 if (ret <= 0)
                         return ret;
@@ -723,13 +593,6 @@ iter_file_splice_write(struct pipe_inode_info *pipe, struct file *out,
                                 continue;
 
                         this_len = min(this_len, left);
-                        ret = pipe_buf_confirm(pipe, buf);
-                        if (unlikely(ret)) {
-                                if (ret == -ENODATA)
-                                        ret = 0;
-                                goto done;
-                        }
-
                         bvec_set_page(&array[n], buf->page, this_len,
                                       buf->offset);
                         left -= this_len;
@@ -764,7 +627,7 @@ iter_file_splice_write(struct pipe_inode_info *pipe, struct file *out,
                         }
                 }
         }
-done:
+
         kfree(array);
         splice_from_pipe_end(pipe, &sd);
 
@@ -855,13 +718,6 @@ ssize_t splice_to_socket(struct pipe_inode_info *pipe, struct file *out,
 
                         seg = min_t(size_t, remain, buf->len);
 
-                        ret = pipe_buf_confirm(pipe, buf);
-                        if (unlikely(ret)) {
-                                if (ret == -ENODATA)
-                                        ret = 0;
-                                break;
-                        }
-
                         bvec_set_page(&bvec[bc++], buf->page, seg, buf->offset);
                         remain -= seg;
                         if (remain == 0 || bc >= ARRAY_SIZE(bvec))
@@ -1450,7 +1306,6 @@ static int splice_try_to_steal_page(struct pipe_inode_info *pipe,
 need_copy_unlock:
         folio_unlock(folio);
 need_copy:
-
         copy = folio_alloc(GFP_KERNEL, 0);
         if (!copy)
                 return -ENOMEM;
@@ -1578,10 +1433,6 @@ static long vmsplice_to_pipe(struct file *file, struct iov_iter *iter,
 {
         struct pipe_inode_info *pipe;
         long ret = 0;
-        unsigned buf_flag = 0;
-
-        if (flags & SPLICE_F_GIFT)
-                buf_flag = PIPE_BUF_FLAG_GIFT;
 
         pipe = get_pipe_info(file, true);
         if (!pipe)
@@ -1592,7 +1443,7 @@ static long vmsplice_to_pipe(struct file *file, struct iov_iter *iter,
         pipe_lock(pipe);
         ret = wait_for_space(pipe, flags);
         if (!ret)
-                ret = iter_to_pipe(iter, pipe, buf_flag);
+                ret = iter_to_pipe(iter, pipe, flags);
         pipe_unlock(pipe);
         if (ret > 0)
                 wakeup_pipe_readers(pipe);
@@ -1876,7 +1727,6 @@ static int splice_pipe_to_pipe(struct pipe_inode_info *ipipe,
                  * Don't inherit the gift and merge flags, we need to
                  * prevent multiple steals of this page.
                  */
-                obuf->flags &= ~PIPE_BUF_FLAG_GIFT;
                 obuf->flags &= ~PIPE_BUF_FLAG_CAN_MERGE;
 
                 obuf->len = len;
@@ -1968,7 +1818,6 @@ static int link_pipe(struct pipe_inode_info *ipipe,
                  * Don't inherit the gift and merge flag, we need to prevent
                  * multiple steals of this page.
                  */
-                obuf->flags &= ~PIPE_BUF_FLAG_GIFT;
                 obuf->flags &= ~PIPE_BUF_FLAG_CAN_MERGE;
 
                 if (obuf->len > len)
diff --git a/include/linux/pipe_fs_i.h b/include/linux/pipe_fs_i.h
index 02e0086b10f6..9cfbefd7ba31 100644
--- a/include/linux/pipe_fs_i.h
+++ b/include/linux/pipe_fs_i.h
@@ -6,7 +6,6 @@
 #define PIPE_BUF_FLAG_LRU       0x01    /* page is on the LRU */
 #define PIPE_BUF_FLAG_ATOMIC    0x02    /* was atomically mapped */
-#define PIPE_BUF_FLAG_GIFT      0x04    /* page is a gift */
 #define PIPE_BUF_FLAG_PACKET    0x08    /* read() as a packet */
 #define PIPE_BUF_FLAG_CAN_MERGE 0x10    /* can merge buffers */
 #define PIPE_BUF_FLAG_WHOLE     0x20    /* read() must return entire buffer or error */
@@ -203,19 +202,6 @@ static inline void pipe_buf_release(struct pipe_inode_info *pipe,
         ops->release(pipe, buf);
 }
 
-/**
- * pipe_buf_confirm - verify contents of the pipe buffer
- * @pipe: the pipe that the buffer belongs to
- * @buf: the buffer to confirm
- */
-static inline int pipe_buf_confirm(struct pipe_inode_info *pipe,
-                                   struct pipe_buffer *buf)
-{
-        if (!buf->ops->confirm)
-                return 0;
-        return buf->ops->confirm(pipe, buf);
-}
-
 /**
  * pipe_buf_try_steal - attempt to take ownership of a pipe_buffer
  * @pipe: the pipe that the buffer belongs to
diff --git a/include/linux/splice.h b/include/linux/splice.h
index 6c461573434d..3c5abbd49ff2 100644
--- a/include/linux/splice.h
+++ b/include/linux/splice.h
@@ -97,6 +97,5 @@ extern ssize_t splice_to_socket(struct pipe_inode_info *pipe, struct file *out,
 extern int splice_grow_spd(const struct pipe_inode_info *, struct splice_pipe_desc *);
 extern void splice_shrink_spd(struct splice_pipe_desc *);
 
-extern const struct pipe_buf_operations page_cache_pipe_buf_ops;
 extern const struct pipe_buf_operations default_pipe_buf_ops;
 #endif
diff --git a/mm/filemap.c b/mm/filemap.c
index a002df515966..dd144b0dab69 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2929,7 +2929,7 @@ ssize_t splice_folio_into_pipe(struct pipe_inode_info *pipe,
                 size_t part = min_t(size_t, PAGE_SIZE - offset, size - spliced);
 
                 *buf = (struct pipe_buffer) {
-                        .ops = &page_cache_pipe_buf_ops,
+                        .ops = &default_pipe_buf_ops,
                         .page = page,
                         .offset = offset,
                         .len = part,

From patchwork Thu Jun 29 15:54:33 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 114349
From: David Howells
To: netdev@vger.kernel.org
Cc: David Howells, Matthew Wilcox, Dave Chinner, Matt Whitlock,
    Linus Torvalds, Jens Axboe, linux-fsdevel@kvack.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Christoph Hellwig, linux-fsdevel@vger.kernel.org
Subject: [RFC PATCH 4/4] splice: Record some statistics
Date: Thu, 29 Jun 2023 16:54:33 +0100
Message-ID: <20230629155433.4170837-5-dhowells@redhat.com>
In-Reply-To: <20230629155433.4170837-1-dhowells@redhat.com>
References: <20230629155433.4170837-1-dhowells@redhat.com>

Add a proc file to export some statistics for debugging purposes.

Signed-off-by: David Howells
cc: Matthew Wilcox
cc: Dave Chinner
cc: Christoph Hellwig
cc: Jens Axboe
cc: linux-fsdevel@vger.kernel.org
---
 fs/splice.c            | 28 ++++++++++++++++++++++++++++
 include/linux/splice.h |  3 +++
 mm/filemap.c           |  6 +++++-
 3 files changed, 36 insertions(+), 1 deletion(-)

diff --git a/fs/splice.c b/fs/splice.c
index 2b1f109a7d4f..831973ea6b3f 100644
--- a/fs/splice.c
+++ b/fs/splice.c
@@ -36,10 +36,15 @@
 #include
 #include
 #include
+#include
 #include "../mm/internal.h"
 #include "internal.h"
 
+atomic_t splice_stat_filemap_copied, splice_stat_filemap_moved;
+static atomic_t splice_stat_directly_copied;
+static atomic_t vmsplice_stat_copied, vmsplice_stat_stole;
+
 /*
  * Splice doesn't support FMODE_NOWAIT. Since pipes may set this flag to
  * indicate they support non-blocking reads or writes, we must clear it
@@ -276,6 +281,7 @@ ssize_t copy_splice_read(struct file *in, loff_t *ppos,
                 remain -= chunk;
         }
 
+        atomic_add(keep, &splice_stat_directly_copied);
         kfree(bv);
         return ret;
 }
@@ -1299,6 +1305,7 @@ static int splice_try_to_steal_page(struct pipe_inode_info *pipe,
         unmap_mapping_folio(folio);
         if (remove_mapping(folio->mapping, folio)) {
                 folio_clear_mappedtodisk(folio);
+                atomic_inc(&vmsplice_stat_stole);
                 flags |= PIPE_BUF_FLAG_LRU;
                 goto add_to_pipe;
         }
@@ -1316,6 +1323,7 @@ static int splice_try_to_steal_page(struct pipe_inode_info *pipe,
         folio_put(folio);
         folio = copy;
         offset = 0;
+        atomic_inc(&vmsplice_stat_copied);
 
 add_to_pipe:
         page = folio_page(folio, offset / PAGE_SIZE);
@@ -1905,3 +1913,23 @@ SYSCALL_DEFINE4(tee, int, fdin, int, fdout, size_t, len, unsigned int, flags)
 
         return error;
 }
+
+static int splice_stats_show(struct seq_file *m, void *data)
+{
+        seq_printf(m, "filemap: copied=%u moved=%u\n",
+                   atomic_read(&splice_stat_filemap_copied),
+                   atomic_read(&splice_stat_filemap_moved));
+        seq_printf(m, "direct : copied=%u\n",
+                   atomic_read(&splice_stat_directly_copied));
+        seq_printf(m, "vmsplice: copied=%u stole=%u\n",
+                   atomic_read(&vmsplice_stat_copied),
+                   atomic_read(&vmsplice_stat_stole));
+        return 0;
+}
+
+static int splice_stats_init(void)
+{
+        proc_create_single("fs/splice", S_IFREG | 0444, NULL, splice_stats_show);
+        return 0;
+}
+late_initcall(splice_stats_init);
diff --git a/include/linux/splice.h b/include/linux/splice.h
index 3c5abbd49ff2..4f04dc338010 100644
--- a/include/linux/splice.h
+++ b/include/linux/splice.h
@@ -98,4 +98,7 @@ extern int splice_grow_spd(const struct pipe_inode_info *, struct splice_pipe_de
 extern void splice_shrink_spd(struct splice_pipe_desc *);
 
 extern const struct pipe_buf_operations default_pipe_buf_ops;
+
+extern atomic_t splice_stat_filemap_copied, splice_stat_filemap_moved;
+
 #endif
diff --git a/mm/filemap.c b/mm/filemap.c
index dd144b0dab69..38d38cc826fa 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2872,7 +2872,8 @@ ssize_t splice_folio_into_pipe(struct pipe_inode_info *pipe,
         struct address_space *mapping;
         struct folio *copy = NULL;
         struct page *page;
-        unsigned int flags = 0;
+        unsigned int flags = 0, count = 0;
+        atomic_t *stat = &splice_stat_filemap_copied;
         ssize_t ret;
         size_t spliced = 0, offset = offset_in_folio(folio, fpos);
 
@@ -2902,6 +2903,7 @@ ssize_t splice_folio_into_pipe(struct pipe_inode_info *pipe,
         /* If we succeed in removing the mapping, set LRU flag and add it. */
         if (remove_mapping(mapping, folio)) {
                 folio_unlock(folio);
+                stat = &splice_stat_filemap_moved;
                 flags = PIPE_BUF_FLAG_LRU;
                 goto add_to_pipe;
         }
@@ -2940,8 +2942,10 @@ ssize_t splice_folio_into_pipe(struct pipe_inode_info *pipe,
                 page++;
                 spliced += part;
                 offset = 0;
+                count++;
         }
 
+        atomic_add(count, stat);
         if (copy)
                 folio_put(copy);
         return spliced;
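
[Editorial illustration, not part of the patch: a trivial way to inspect the
new counters from userspace.  The path and line format follow
splice_stats_show() above; the values shown are whatever the kernel has
accumulated since boot.]

/* splice-stats.c - dump the statistics file added by this patch. */
#include <stdio.h>

int main(void)
{
        char line[128];
        FILE *f = fopen("/proc/fs/splice", "r");

        if (!f) {
                perror("/proc/fs/splice");
                return 1;
        }
        while (fgets(line, sizeof(line), f))
                fputs(line, stdout);    /* e.g. "filemap: copied=0 moved=0" */
        fclose(f);
        return 0;
}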