[v5,8/9] iov_iter, block: Make bio structs pin pages rather than ref'ing if appropriate
Commit Message
Convert the block layer's bio code to use iov_iter_extract_pages() instead
of iov_iter_get_pages(). This will pin pages or leave them unaltered,
rather than getting a ref on them, as appropriate to the source iterator.
The pages need to be pinned for DIO-read rather than having refs taken on
them to prevent VM copy-on-write from malfunctioning during a concurrent
fork() (the result of the I/O would otherwise end up only visible to the
child process and not the parent).
To implement this:
(1) If the BIO_PAGE_REFFED flag is set, this causes attached pages to be
passed to put_page() during cleanup.
(2) A BIO_PAGE_PINNED flag is provided. If set, this causes attached
pages to be passed to unpin_user_page() during cleanup.
(3) BIO_PAGE_REFFED is set by default and BIO_PAGE_PINNED is cleared by
default when the bio is (re-)initialised.
(4) If iov_iter_extract_pages() indicates FOLL_GET, this causes
BIO_PAGE_REFFED to be set and if FOLL_PIN is indicated, this causes
BIO_PAGE_PINNED to be set. If it returns neither FOLL_* flag, then
both BIO_PAGE_* flags will be cleared.
Mixing sets of pages with different cleanup modes is not supported.
(5) Cloned bio structs have both flags cleared.
(6) bio_release_pages() will do the release if either BIO_PAGE_* flag is
set.
[!] Note that this is tested a bit with ext4, but nothing else.
Changes
=======
ver #5)
- Transcribed the FOLL_* flags returned by iov_iter_extract_pages() into
BIO_* flags and got rid of bi_cleanup_mode.
- Replaced BIO_NO_PAGE_REF with BIO_PAGE_REFFED in the preceding patch.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Al Viro <viro@zeniv.linux.org.uk>
cc: Jens Axboe <axboe@kernel.dk>
cc: Jan Kara <jack@suse.cz>
cc: Christoph Hellwig <hch@lst.de>
cc: Matthew Wilcox <willy@infradead.org>
cc: Logan Gunthorpe <logang@deltatee.com>
cc: linux-block@vger.kernel.org
Link: https://lore.kernel.org/r/167305166150.1521586.10220949115402059720.stgit@warthog.procyon.org.uk/ # v4
---
block/bio.c | 54 ++++++++++++++++++++++++++++++++-------------
include/linux/bio.h | 3 ++-
include/linux/blk_types.h | 1 +
3 files changed, 41 insertions(+), 17 deletions(-)
Comments
On Wed, Jan 11, 2023 at 02:28:35PM +0000, David Howells wrote:
> [!] Note that this is tested a bit with ext4, but nothing else.
You probably want to also at least test it with block device I/O
as that is a slightly different I/O path from iomap. More file systems
also never hurt, but aren't quite as important.
> +/*
> + * Clean up a page appropriately, where the page may be pinned, may have a
> + * ref taken on it or neither.
> + */
> +static void bio_release_page(struct bio *bio, struct page *page)
> +{
> + if (bio_flagged(bio, BIO_PAGE_PINNED))
> + unpin_user_page(page);
> + if (bio_flagged(bio, BIO_PAGE_REFFED))
> + put_page(page);
> +}
> +
> void __bio_release_pages(struct bio *bio, bool mark_dirty)
> {
> struct bvec_iter_all iter_all;
> @@ -1183,7 +1197,7 @@ void __bio_release_pages(struct bio *bio, bool mark_dirty)
> bio_for_each_segment_all(bvec, bio, iter_all) {
> if (mark_dirty && !PageCompound(bvec->bv_page))
> set_page_dirty_lock(bvec->bv_page);
> - put_page(bvec->bv_page);
> + bio_release_page(bio, bvec->bv_page);
So this does look correct and sensible, but given that the new pin/unpin
path has a significantly higher overhead, I wonder if this might be a
good time to switch to folios here as soon as possible in a follow-on
patch.
> + size = iov_iter_extract_pages(iter, &pages,
> + UINT_MAX - bio->bi_iter.bi_size,
> + nr_pages, gup_flags,
> + &offset, &cleanup_mode);
> if (unlikely(size <= 0))
> return size ? size : -EFAULT;
>
> + bio_clear_flag(bio, BIO_PAGE_REFFED);
> + bio_clear_flag(bio, BIO_PAGE_PINNED);
> + if (cleanup_mode & FOLL_GET)
> + bio_set_flag(bio, BIO_PAGE_REFFED);
> + if (cleanup_mode & FOLL_PIN)
> + bio_set_flag(bio, BIO_PAGE_PINNED);
The flags here must not change from one invocation to another, so
clearing and resetting them on every iteration seems dangerous.
This should probably be:
	if (cleanup_mode & FOLL_GET) {
		WARN_ON_ONCE(bio_flagged(bio, BIO_PAGE_PINNED));
		bio_set_flag(bio, BIO_PAGE_REFFED);
	}
	if (cleanup_mode & FOLL_PIN) {
		WARN_ON_ONCE(bio_flagged(bio, BIO_PAGE_REFFED));
		bio_set_flag(bio, BIO_PAGE_PINNED);
	}
Christoph Hellwig <hch@infradead.org> wrote:
> 	if (cleanup_mode & FOLL_GET) {
> 		WARN_ON_ONCE(bio_flagged(bio, BIO_PAGE_PINNED));
> 		bio_set_flag(bio, BIO_PAGE_REFFED);
> 	}
> 	if (cleanup_mode & FOLL_PIN) {
> 		WARN_ON_ONCE(bio_flagged(bio, BIO_PAGE_REFFED));
> 		bio_set_flag(bio, BIO_PAGE_PINNED);
> 	}
That won't necessarily work as you might get back cleanup_mode == 0, in which
case both flags are cleared - and neither warning will trip on the next
addition.
I could change it so that rather than using a pair of flags, it uses a
four-state variable (which can be stored in bi_flags): BIO_PAGE_DEFAULT,
BIO_PAGE_REFFED, BIO_PAGE_PINNED, BIO_PAGE_NO_CLEANUP, say.
Or I could add an extra flag to say that the setting is locked. Or we could
just live with the scenario I outlined possibly happening.
David
On Thu, Jan 12, 2023 at 10:28:41AM +0000, David Howells wrote:
> Christoph Hellwig <hch@infradead.org> wrote:
>
> > 	if (cleanup_mode & FOLL_GET) {
> > 		WARN_ON_ONCE(bio_flagged(bio, BIO_PAGE_PINNED));
> > 		bio_set_flag(bio, BIO_PAGE_REFFED);
> > 	}
> > 	if (cleanup_mode & FOLL_PIN) {
> > 		WARN_ON_ONCE(bio_flagged(bio, BIO_PAGE_REFFED));
> > 		bio_set_flag(bio, BIO_PAGE_PINNED);
> > 	}
>
> That won't necessarily work as you might get back cleanup_mode == 0, in which
> case both flags are cleared - and neither warning will trip on the next
> addition.
Well, it will work for the intended use case even with
cleanup_mode == 0, we just won't get the debug check. Or am I missing
something fundamental?
On Thu, Jan 12, 2023 at 06:09:16AM -0800, Christoph Hellwig wrote:
> On Thu, Jan 12, 2023 at 10:28:41AM +0000, David Howells wrote:
> > Christoph Hellwig <hch@infradead.org> wrote:
> >
> > > 	if (cleanup_mode & FOLL_GET) {
> > > 		WARN_ON_ONCE(bio_flagged(bio, BIO_PAGE_PINNED));
> > > 		bio_set_flag(bio, BIO_PAGE_REFFED);
> > > 	}
> > > 	if (cleanup_mode & FOLL_PIN) {
> > > 		WARN_ON_ONCE(bio_flagged(bio, BIO_PAGE_REFFED));
> > > 		bio_set_flag(bio, BIO_PAGE_PINNED);
> > > 	}
> >
> > That won't necessarily work as you might get back cleanup_mode == 0, in which
> > case both flags are cleared - and neither warning will trip on the next
> > addition.
>
> Well, it will work for the intended use case even with
> cleanup_mode == 0, we just won't get the debug check. Or am I missing
> something fundamental?
In fact looking at the code we can debug check that case too by doing:
	if (cleanup_mode & FOLL_GET) {
		WARN_ON_ONCE(bio_flagged(bio, BIO_PAGE_PINNED));
		bio_set_flag(bio, BIO_PAGE_REFFED);
	} else if (cleanup_mode & FOLL_PIN) {
		WARN_ON_ONCE(bio_flagged(bio, BIO_PAGE_REFFED));
		bio_set_flag(bio, BIO_PAGE_PINNED);
	} else {
		WARN_ON_ONCE(bio_flagged(bio, BIO_PAGE_PINNED) ||
			     bio_flagged(bio, BIO_PAGE_REFFED));
	}
But given that all calls for the same iter type return the same
cleanup_mode by definition I'm not even sure we need any of this
debug checking, and might as well just do:
	if (cleanup_mode & FOLL_GET)
		bio_set_flag(bio, BIO_PAGE_REFFED);
	else if (cleanup_mode & FOLL_PIN)
		bio_set_flag(bio, BIO_PAGE_PINNED);
Christoph Hellwig <hch@infradead.org> wrote:
> But given that all calls for the same iter type return the same
> cleanup_mode by definition I'm not even sure we need any of this
> debug checking, and might as well just do:
>
> if (cleanup_mode & FOLL_GET)
> bio_set_flag(bio, BIO_PAGE_REFFED);
> else if (cleanup_mode & FOLL_PIN)
> bio_set_flag(bio, BIO_PAGE_PINNED);
That's kind of what I'm doing - though I've left out the else just in case the
VM decides to indicate back both FOLL_GET and FOLL_PIN. I'm not sure why it
would but...
David
On Thu, Jan 12, 2023 at 02:58:49PM +0000, David Howells wrote:
> That's kind of what I'm doing - though I've left out the else just in case the
> VM decides to indicate back both FOLL_GET and FOLL_PIN. I'm not sure why it
> would but...
It really can't - they are exclusive. Maybe we need an assert for that
somewhere, but we surely shouldn't try to deal with it.
@@ -245,8 +245,9 @@ static void bio_free(struct bio *bio)
* when IO has completed, or when the bio is released.
*
* We set the initial assumption that pages attached to the bio will be
- * released with put_page() by setting BIO_PAGE_REFFED; if the pages
- * should not be put, this flag should be cleared.
+ * released with put_page() by setting BIO_PAGE_REFFED, but this should be set
+ * to BIO_PAGE_PINNED if the page should be unpinned instead; if the pages
+ * should not be put or unpinned, these flags should be cleared.
*/
void bio_init(struct bio *bio, struct block_device *bdev, struct bio_vec *table,
unsigned short max_vecs, blk_opf_t opf)
@@ -819,6 +820,7 @@ static int __bio_clone(struct bio *bio, struct bio *bio_src, gfp_t gfp)
{
bio_set_flag(bio, BIO_CLONED);
bio_clear_flag(bio, BIO_PAGE_REFFED);
+ bio_clear_flag(bio, BIO_PAGE_PINNED);
bio->bi_ioprio = bio_src->bi_ioprio;
bio->bi_iter = bio_src->bi_iter;
@@ -1175,6 +1177,18 @@ bool bio_add_folio(struct bio *bio, struct folio *folio, size_t len,
return bio_add_page(bio, &folio->page, len, off) > 0;
}
+/*
+ * Clean up a page appropriately, where the page may be pinned, may have a
+ * ref taken on it or neither.
+ */
+static void bio_release_page(struct bio *bio, struct page *page)
+{
+ if (bio_flagged(bio, BIO_PAGE_PINNED))
+ unpin_user_page(page);
+ if (bio_flagged(bio, BIO_PAGE_REFFED))
+ put_page(page);
+}
+
void __bio_release_pages(struct bio *bio, bool mark_dirty)
{
struct bvec_iter_all iter_all;
@@ -1183,7 +1197,7 @@ void __bio_release_pages(struct bio *bio, bool mark_dirty)
bio_for_each_segment_all(bvec, bio, iter_all) {
if (mark_dirty && !PageCompound(bvec->bv_page))
set_page_dirty_lock(bvec->bv_page);
- put_page(bvec->bv_page);
+ bio_release_page(bio, bvec->bv_page);
}
}
EXPORT_SYMBOL_GPL(__bio_release_pages);
@@ -1220,7 +1234,7 @@ static int bio_iov_add_page(struct bio *bio, struct page *page,
}
if (same_page)
- put_page(page);
+ bio_release_page(bio, page);
return 0;
}
@@ -1234,7 +1248,7 @@ static int bio_iov_add_zone_append_page(struct bio *bio, struct page *page,
queue_max_zone_append_sectors(q), &same_page) != len)
return -EINVAL;
if (same_page)
- put_page(page);
+ bio_release_page(bio, page);
return 0;
}
@@ -1245,10 +1259,10 @@ static int bio_iov_add_zone_append_page(struct bio *bio, struct page *page,
* @bio: bio to add pages to
* @iter: iov iterator describing the region to be mapped
*
- * Pins pages from *iter and appends them to @bio's bvec array. The
- * pages will have to be released using put_page() when done.
- * For multi-segment *iter, this function only adds pages from the
- * next non-empty segment of the iov iterator.
+ * Extracts pages from *iter and appends them to @bio's bvec array. The pages
+ * will have to be cleaned up in the way indicated by the BIO_PAGE_REFFED and
+ * BIO_PAGE_PINNED flags. For a multi-segment *iter, this function only adds
+ * pages from the next non-empty segment of the iov iterator.
*/
static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
{
@@ -1256,7 +1270,7 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
unsigned short entries_left = bio->bi_max_vecs - bio->bi_vcnt;
struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt;
struct page **pages = (struct page **)bv;
- unsigned int gup_flags = 0;
+ unsigned int gup_flags = 0, cleanup_mode;
ssize_t size, left;
unsigned len, i = 0;
size_t offset, trim;
@@ -1280,12 +1294,20 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
* result to ensure the bio's total size is correct. The remainder of
* the iov data will be picked up in the next bio iteration.
*/
- size = iov_iter_get_pages(iter, pages,
- UINT_MAX - bio->bi_iter.bi_size,
- nr_pages, &offset, gup_flags);
+ size = iov_iter_extract_pages(iter, &pages,
+ UINT_MAX - bio->bi_iter.bi_size,
+ nr_pages, gup_flags,
+ &offset, &cleanup_mode);
if (unlikely(size <= 0))
return size ? size : -EFAULT;
+ bio_clear_flag(bio, BIO_PAGE_REFFED);
+ bio_clear_flag(bio, BIO_PAGE_PINNED);
+ if (cleanup_mode & FOLL_GET)
+ bio_set_flag(bio, BIO_PAGE_REFFED);
+ if (cleanup_mode & FOLL_PIN)
+ bio_set_flag(bio, BIO_PAGE_PINNED);
+
nr_pages = DIV_ROUND_UP(offset + size, PAGE_SIZE);
trim = size & (bdev_logical_block_size(bio->bi_bdev) - 1);
@@ -1315,7 +1337,7 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
iov_iter_revert(iter, left);
out:
while (i < nr_pages)
- put_page(pages[i++]);
+ bio_release_page(bio, pages[i++]);
return ret;
}
@@ -1496,8 +1518,8 @@ void bio_set_pages_dirty(struct bio *bio)
* the BIO and re-dirty the pages in process context.
*
* It is expected that bio_check_pages_dirty() will wholly own the BIO from
- * here on. It will run one put_page() against each page and will run one
- * bio_put() against the BIO.
+ * here on. It will run one put_page() or unpin_user_page() against each page
+ * and will run one bio_put() against the BIO.
*/
static void bio_dirty_fn(struct work_struct *work);
@@ -482,7 +482,8 @@ void zero_fill_bio(struct bio *bio);
static inline void bio_release_pages(struct bio *bio, bool mark_dirty)
{
- if (bio_flagged(bio, BIO_PAGE_REFFED))
+ if (bio_flagged(bio, BIO_PAGE_REFFED) ||
+ bio_flagged(bio, BIO_PAGE_PINNED))
__bio_release_pages(bio, mark_dirty);
}
@@ -319,6 +319,7 @@ struct bio {
*/
enum {
BIO_PAGE_REFFED, /* Pages need refs putting (see FOLL_GET) */
+ BIO_PAGE_PINNED, /* Pages need pins unpinning (see FOLL_PIN) */
BIO_CLONED, /* doesn't own data */
BIO_BOUNCED, /* bio is a bounce bio */
BIO_QUIET, /* Make BIO Quiet */