linux-next: manual merge of the block tree with the mm tree

Message ID 20230323100204.0917cecf@canb.auug.org.au
State New

Commit Message

Stephen Rothwell March 22, 2023, 11:02 p.m. UTC
  Hi all,

Today's linux-next merge of the block tree got a conflict in:

  lib/iov_iter.c

between commit:

  c4cf24ce34b7 ("iov_iter: add copy_page_to_iter_atomic()")

from the mm tree and commit:

  a53f5dee3448 ("iov_iter: Kill ITER_PIPE")

from the block tree.

I fixed it up (see below) and can carry the fix as necessary. This
is now fixed as far as linux-next is concerned, but any non-trivial
conflicts should be mentioned to your upstream maintainer when your tree
is submitted for merging.  You may also want to consider cooperating
with the maintainer of the conflicting tree to minimise any particularly
complex conflicts.
  

Comments

David Howells March 22, 2023, 11:13 p.m. UTC | #1
Stephen Rothwell <sfr@canb.auug.org.au> wrote:

>  +	if (unlikely(iov_iter_is_pipe(i))) {
>  +		copied = copy_page_to_iter_pipe(page, offset, bytes, i);
>  +		goto out;
>  +	}

This bit would need to be removed from copy_page_to_iter_atomic() as the two
functions it calls should be removed by the patch in the block tree.

David
  
Jens Axboe March 22, 2023, 11:15 p.m. UTC | #2
On 3/22/23 5:13 PM, David Howells wrote:
> Stephen Rothwell <sfr@canb.auug.org.au> wrote:
> 
>>  +	if (unlikely(iov_iter_is_pipe(i))) {
>>  +		copied = copy_page_to_iter_pipe(page, offset, bytes, i);
>>  +		goto out;
>>  +	}
> 
> This bit would need to be removed from copy_page_to_iter_atomic() as the two
> functions it calls should be removed by the patch in the block tree.

Maybe it'd be better to consolidate rather than split the changes over
two trees?
  
Andrew Morton March 22, 2023, 11:26 p.m. UTC | #3
On Wed, 22 Mar 2023 17:15:48 -0600 Jens Axboe <axboe@kernel.dk> wrote:

> On 3/22/23 5:13 PM, David Howells wrote:
> > Stephen Rothwell <sfr@canb.auug.org.au> wrote:
> > 
> >>  +	if (unlikely(iov_iter_is_pipe(i))) {
> >>  +		copied = copy_page_to_iter_pipe(page, offset, bytes, i);
> >>  +		goto out;
> >>  +	}
> > 
> > This bit would need to be removed from copy_page_to_iter_atomic() as the two
> > functions it calls should be removed by the patch in the block tree.
> 
> Maybe it'd be better to consolidate rather than split the changes over
> two trees?

fyi, Lorenzo has sent out v7 of this series.  I'll be pushing this in
an hour or so.  After which I suggest Stephen removes those (now) two
lines and sends out one of his "build fix" emails which can be the
basis for Linus's resolution.

Or I can just steal "iov_iter: Kill ITER_PIPE"...





From: Lorenzo Stoakes <lstoakes@gmail.com>
Subject: iov_iter: add copy_page_to_iter_nofault()
Date: Wed, 22 Mar 2023 18:57:03 +0000

Provide a means to copy a page to user space from an iterator, aborting if
a page fault would occur.  This supports compound pages, but may be passed
a tail page with an offset extending further into the compound page, so we
cannot pass a folio.

This allows the function to be called from atomic context and to _try_ to
copy to user pages if they are faulted in, aborting if not.

The function does not use _copy_to_iter() in order to avoid might_fault();
this is similar to copy_page_from_iter_atomic().

This is being added so that an iterator-based form of vread() can be
implemented while holding spinlocks.
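
As an illustration of that intended usage (not part of this patch; the
helper name and calling context below are hypothetical), a caller holding a
spinlock would treat a short copy as "destination not faulted in":

	/*
	 * Hypothetical caller sketch, not from this series: copy part of a
	 * page to userspace while a spinlock is held, so no page fault may
	 * be taken.  copy_page_to_iter_nofault() returns the number of
	 * bytes copied; a short copy means the destination was not faulted
	 * in, and the caller must drop the lock before trying again.
	 */
	static bool read_page_nofault(struct page *page, unsigned int offset,
				      size_t len, struct iov_iter *iter)
	{
		size_t copied;

		/* caller holds a spinlock here: faults are not allowed */
		copied = copy_page_to_iter_nofault(page, offset, len, iter);

		/* true if everything was copied without needing a fault */
		return copied == len;
	}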

Link: https://lkml.kernel.org/r/19734729defb0f498a76bdec1bef3ac48a3af3e8.1679511146.git.lstoakes@gmail.com
Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Baoquan He <bhe@redhat.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Liu Shixin <liushixin2@huawei.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Uladzislau Rezki (Sony) <urezki@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---


--- a/include/linux/uio.h~iov_iter-add-copy_page_to_iter_nofault
+++ a/include/linux/uio.h
@@ -173,6 +173,8 @@ static inline size_t copy_folio_to_iter(
 {
 	return copy_page_to_iter(&folio->page, offset, bytes, i);
 }
+size_t copy_page_to_iter_nofault(struct page *page, unsigned offset,
+				 size_t bytes, struct iov_iter *i);
 
 static __always_inline __must_check
 size_t copy_to_iter(const void *addr, size_t bytes, struct iov_iter *i)
--- a/lib/iov_iter.c~iov_iter-add-copy_page_to_iter_nofault
+++ a/lib/iov_iter.c
@@ -172,6 +172,18 @@ static int copyout(void __user *to, cons
 	return n;
 }
 
+static int copyout_nofault(void __user *to, const void *from, size_t n)
+{
+	long res;
+
+	if (should_fail_usercopy())
+		return n;
+
+	res = copy_to_user_nofault(to, from, n);
+
+	return res < 0 ? n : res;
+}
+
 static int copyin(void *to, const void __user *from, size_t n)
 {
 	size_t res = n;
@@ -734,6 +746,42 @@ size_t copy_page_to_iter(struct page *pa
 }
 EXPORT_SYMBOL(copy_page_to_iter);
 
+size_t copy_page_to_iter_nofault(struct page *page, unsigned offset, size_t bytes,
+				 struct iov_iter *i)
+{
+	size_t res = 0;
+
+	if (!page_copy_sane(page, offset, bytes))
+		return 0;
+	if (WARN_ON_ONCE(i->data_source))
+		return 0;
+	if (unlikely(iov_iter_is_pipe(i)))
+		return copy_page_to_iter_pipe(page, offset, bytes, i);
+	page += offset / PAGE_SIZE; // first subpage
+	offset %= PAGE_SIZE;
+	while (1) {
+		void *kaddr = kmap_local_page(page);
+		size_t n = min(bytes, (size_t)PAGE_SIZE - offset);
+
+		iterate_and_advance(i, n, base, len, off,
+			copyout_nofault(base, kaddr + offset + off, len),
+			memcpy(base, kaddr + offset + off, len)
+		)
+		kunmap_local(kaddr);
+		res += n;
+		bytes -= n;
+		if (!bytes || !n)
+			break;
+		offset += n;
+		if (offset == PAGE_SIZE) {
+			page++;
+			offset = 0;
+		}
+	}
+	return res;
+}
+EXPORT_SYMBOL(copy_page_to_iter_nofault);
+
 size_t copy_page_from_iter(struct page *page, size_t offset, size_t bytes,
 			 struct iov_iter *i)
 {
  
Stephen Rothwell March 23, 2023, 12:03 a.m. UTC | #4
Hi all,

On Wed, 22 Mar 2023 16:26:38 -0700 Andrew Morton <akpm@linux-foundation.org> wrote:
>
> fyi, Lorenzo has sent out v7 of this series.  I'll be pushing this in
> an hour or so.  After which I suggest Stephen removes those (now) two
> lines and sends out one of his "build fix" emails which can be the
> basis for Linus's resolution.

I have not picked up v7 (I will get that tomorrow), but I have gone
back in today's tree and changed the merge resolution to be as below.
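
For reference, a sketch of that resolution: the copy_page_to_iter_nofault()
body from the patch quoted above with the two ITER_PIPE lines dropped. This
is reconstructed here for illustration, not copied verbatim from the
linux-next merge commit.

	size_t copy_page_to_iter_nofault(struct page *page, unsigned offset, size_t bytes,
					 struct iov_iter *i)
	{
		size_t res = 0;

		if (!page_copy_sane(page, offset, bytes))
			return 0;
		if (WARN_ON_ONCE(i->data_source))
			return 0;
		/* The ITER_PIPE check and copy_page_to_iter_pipe() call are gone:
		 * both are removed by "iov_iter: Kill ITER_PIPE" in the block tree. */
		page += offset / PAGE_SIZE; // first subpage
		offset %= PAGE_SIZE;
		while (1) {
			void *kaddr = kmap_local_page(page);
			size_t n = min(bytes, (size_t)PAGE_SIZE - offset);

			iterate_and_advance(i, n, base, len, off,
				copyout_nofault(base, kaddr + offset + off, len),
				memcpy(base, kaddr + offset + off, len)
			)
			kunmap_local(kaddr);
			res += n;
			bytes -= n;
			if (!bytes || !n)
				break;
			offset += n;
			if (offset == PAGE_SIZE) {
				page++;
				offset = 0;
			}
		}
		return res;
	}
	EXPORT_SYMBOL(copy_page_to_iter_nofault);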
  
David Howells March 23, 2023, 12:04 a.m. UTC | #5
Andrew Morton <akpm@linux-foundation.org> wrote:

> Or I can just steal "iov_iter: Kill ITER_PIPE"...

You'd need to steal the eight patches before it too.

https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git/log/?h=for-6.4/splice

	iov_iter: Kill ITER_PIPE
	cifs: Use generic_file_splice_read()	
	splice: Do splice read from a file without using ITER_PIPE
	tty, proc, kernfs, random: Use direct_splice_read()	
	coda: Implement splice-read	
	overlayfs: Implement splice-read
	shmem: Implement splice-read	
	splice: Make do_splice_to() generic and export it
	splice: Clean up direct_splice_read() a bit

David
  
Jens Axboe March 23, 2023, 12:49 a.m. UTC | #6
On 3/22/23 5:26 PM, Andrew Morton wrote:
> On Wed, 22 Mar 2023 17:15:48 -0600 Jens Axboe <axboe@kernel.dk> wrote:
> 
>> On 3/22/23 5:13 PM, David Howells wrote:
>>> Stephen Rothwell <sfr@canb.auug.org.au> wrote:
>>>
>>>>  +	if (unlikely(iov_iter_is_pipe(i))) {
>>>>  +		copied = copy_page_to_iter_pipe(page, offset, bytes, i);
>>>>  +		goto out;
>>>>  +	}
>>>
>>> This bit would need to be removed from copy_page_to_iter_atomic() as the two
>>> functions it calls should be removed by the patch in the block tree.
>>
>> Maybe it'd be better to consolidate rather than split the changes over
>> two trees?
> 
> fyi, Lorenzo has sent out v7 of this series.  I'll be pushing this in
> an hour or so.  After which I suggest Stephen removes those (now) two
> lines and sends out one of his "build fix" emails which can be the
> basis for Linus's resolution.
> 
> Or I can just steal "iov_iter: Kill ITER_PIPE"...

Or how about we just make sure to queue up items that touch the same
stuff in the same tree? I've already had this queued for a week, and it
seems pretty silly to shuffle things around because something got
posted in 4 iterations today and then queued up right after.

Plus the build is now broken, so maybe a bit more diligence would be
required here than drive-by merging.
  

Patch

diff --cc lib/iov_iter.c
index 48ca1c5dfc04,fad95e4cf372..000000000000
--- a/lib/iov_iter.c