[RFC,4/4] ublk_drv: support splice based read/write zero copy
Commit Message
Pass ublk block IO request pages to the kernel backend IO handling code
via a pipe, so that copying of request pages can be avoided. So far, the
existing pipe/splice mechanism only works for handling write requests.

The initial idea of using splice for zero copy is from Miklos and Stefan.

Zero copy for read requests requires a pipe change to allow one read end
to produce buffers for another read end to consume. The added
SPLICE_F_READ_TO_READ flag supports this feature.

READ is handled by sending IORING_OP_SPLICE with SPLICE_F_DIRECT |
SPLICE_F_READ_TO_READ. WRITE is handled by sending IORING_OP_SPLICE with
SPLICE_F_DIRECT. A kernel-internal pipe is used to simplify userspace and
to avoid potential information leaks.
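For illustration, here is a userspace sketch (not part of this patch) of
how a ublk server might issue these splices via liburing. The
SPLICE_F_DIRECT / SPLICE_F_READ_TO_READ values and the fd_in/fd_out
convention below are assumptions based on the description above;
ublk_pos() is the helper added to the uapi header by this patch:

#include <stdbool.h>
#include <liburing.h>
#include <linux/ublk_cmd.h>

#ifndef SPLICE_F_DIRECT
#define SPLICE_F_DIRECT         0x10    /* assumed value, added by this series */
#endif
#ifndef SPLICE_F_READ_TO_READ
#define SPLICE_F_READ_TO_READ   0x20    /* assumed value, added by this series */
#endif

/*
 * Splice one request's pages between the ublk char device and the
 * backing file; the kernel-internal pipe stays hidden from userspace.
 */
static void queue_zc_splice(struct io_uring *ring, int ublk_cdev_fd,
                            int backing_fd, __u64 backing_off, __u16 q_id,
                            __u16 tag, __u32 bytes, bool op_read)
{
        struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
        /* encode queue/tag/buffer-offset in the splice offset (ublk_pos()) */
        __u64 pos = ublk_pos(q_id, tag, 0);
        unsigned int flags = SPLICE_F_DIRECT;

        if (op_read)    /* READ: the backing file fills the request pages */
                flags |= SPLICE_F_READ_TO_READ;

        io_uring_prep_splice(sqe, ublk_cdev_fd, pos, backing_fd,
                             backing_off, bytes, flags);
}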
Suggested-by: Miklos Szeredi <mszeredi@redhat.com>
Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
drivers/block/ublk_drv.c | 151 +++++++++++++++++++++++++++++++++-
include/uapi/linux/ublk_cmd.h | 34 +++++++-
2 files changed, 182 insertions(+), 3 deletions(-)
Comments
On 11/3/22 09:50, Ming Lei wrote:
> Pass ublk block IO request pages to the kernel backend IO handling code
> via a pipe, so that copying of request pages can be avoided. So far, the
> existing pipe/splice mechanism only works for handling write requests.
>
> The initial idea of using splice for zero copy is from Miklos and Stefan.
>
> Zero copy for read requests requires a pipe change to allow one read end
> to produce buffers for another read end to consume. The added
> SPLICE_F_READ_TO_READ flag supports this feature.
>
> READ is handled by sending IORING_OP_SPLICE with SPLICE_F_DIRECT |
> SPLICE_F_READ_TO_READ. WRITE is handled by sending IORING_OP_SPLICE with
> SPLICE_F_DIRECT. A kernel-internal pipe is used to simplify userspace and
> to avoid potential information leaks.
Sorry to ask, do you have an ublk branch that gives an example of how to
use this?
I still have several things to fix in my branches, but I got basic fuse
uring with copies working. Adding back splice would be next after
posting rfc patches. My initial assumption was that I needed to
duplicate everything splice does into the fuse .uring_cmd handler -
obviously there is a better way with your patches.
This week I have a few days off; by the end of next week or the week after
I might have patches in an RFC state (one thing I'm going to ask about is
how I know what the next CQE in the kernel handler is - ublk does this
with tags through mq, but I don't understand yet where the tag is
increased and what the relation between tag and right CQE order is).
This was modeled a bit after ublk, but then diverged.
https://github.com/aakefbs/linux/tree/fuse-uring
https://github.com/aakefbs/libfuse/tree/uring
(Again, the branches are by no means ready for review yet.)
Thanks,
Bernd
On Thu, Nov 03, 2022 at 11:28:29PM +0100, Bernd Schubert wrote:
>
>
> On 11/3/22 09:50, Ming Lei wrote:
> > Pass ublk block IO request pages to the kernel backend IO handling code
> > via a pipe, so that copying of request pages can be avoided. So far, the
> > existing pipe/splice mechanism only works for handling write requests.
> >
> > The initial idea of using splice for zero copy is from Miklos and Stefan.
> >
> > Zero copy for read requests requires a pipe change to allow one read end
> > to produce buffers for another read end to consume. The added
> > SPLICE_F_READ_TO_READ flag supports this feature.
> >
> > READ is handled by sending IORING_OP_SPLICE with SPLICE_F_DIRECT |
> > SPLICE_F_READ_TO_READ. WRITE is handled by sending IORING_OP_SPLICE with
> > SPLICE_F_DIRECT. A kernel-internal pipe is used to simplify userspace and
> > to avoid potential information leaks.
>
>
> Sorry to ask, do you have an ublk branch that gives an example of how to
> use this?
Here is the ublk splice-zc branch:

https://github.com/ming1/ubdsrv/commits/splice-zc

It is mentioned in the cover letter, but I guess it should have been added
here too, sorry for that. So far only ublk-loop supports it, via:

ublk add -t loop -f $BACKING -z

Without '-z', ublk-loop is created with zero copy disabled.
>
> I still have several things to fix in my branches, but I got basic fuse
> uring with copies working. Adding back splice would be next after posting
> rfc patches. My initial assumption was that I needed to duplicate everything
> splice does into the fuse .uring_cmd handler - obviously there is a better
> way with your patches.
>
> This week I have a few days off; by the end of next week or the week after
> I might have patches in an RFC state (one thing I'm going to ask about is
> how I know what the next CQE in the kernel handler is - ublk does this with
> tags through mq, but I don't understand yet where the tag is increased and
> what the relation between tag and right CQE order is).
The tag is an attribute of the io request; it originates from the ublk
driver and is unique for each request within one queue. So ublksrv
doesn't change it at all, it just uses it, and the ublk driver guarantees
that it is unique.

In the ublksrv implementation, the tag is encoded into cqe->user_data, so
we can retrieve the io request via the tag part of cqe->user_data.
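For illustration, a minimal sketch of that user_data convention (the
exact ublksrv encoding also carries op and target-io bits; this layout
is simplified and the names are hypothetical):

#include <linux/types.h>

/* illustrative only: tag in the low 16 bits of user_data */
static inline __u64 tag_to_user_data(__u16 tag, __u8 op)
{
        return tag | ((__u64)op << 16);
}

static inline __u16 user_data_to_tag(__u64 user_data)
{
        return user_data & 0xffff;
}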
Also, I may not understand your question about 'the relation between tag
and right CQE order'. io_uring provides IOSQE_IO_DRAIN/IOSQE_IO_LINK for
ordering SQEs, and ublksrv only applies IOSQE_IO_LINK in ublk-qcow2, so
could you explain 'the relation between tag and right CQE order' in a bit
more detail?
Thanks,
Ming
On 11/4/22 01:44, Ming Lei wrote:
> On Thu, Nov 03, 2022 at 11:28:29PM +0100, Bernd Schubert wrote:
>>
>>
>> On 11/3/22 09:50, Ming Lei wrote:
>>> Pass ublk block IO request pages to the kernel backend IO handling code
>>> via a pipe, so that copying of request pages can be avoided. So far, the
>>> existing pipe/splice mechanism only works for handling write requests.
>>>
>>> The initial idea of using splice for zero copy is from Miklos and Stefan.
>>>
>>> Zero copy for read requests requires a pipe change to allow one read end
>>> to produce buffers for another read end to consume. The added
>>> SPLICE_F_READ_TO_READ flag supports this feature.
>>>
>>> READ is handled by sending IORING_OP_SPLICE with SPLICE_F_DIRECT |
>>> SPLICE_F_READ_TO_READ. WRITE is handled by sending IORING_OP_SPLICE with
>>> SPLICE_F_DIRECT. A kernel-internal pipe is used to simplify userspace and
>>> to avoid potential information leaks.
>>
>>
>> Sorry to ask, do you have an ublk branch that gives an example of how to
>> use this?
>
> Here is the ublk splice-zc branch:
>
> https://github.com/ming1/ubdsrv/commits/splice-zc
>
> It is mentioned in the cover letter, but I guess it should have been added
> here too, sorry for that. So far only ublk-loop supports it, via:
>
> ublk add -t loop -f $BACKING -z
>
> Without '-z', ublk-loop is created with zero copy disabled.
Ah, thanks a lot! And sorry, I had missed this part in the cover letter.
I will take a look at your new zero copy code on Monday.
>
>>
>> I still have several things to fix in my branches, but I got basic fuse
>> uring with copies working. Adding back splice would be next after posting
>> rfc patches. My initial assumption was that I needed to duplicate everything
>> splice does into the fuse .uring_cmd handler - obviously there is a better
>> way with your patches.
>>
>> This week I have a few days off; by the end of next week or the week after
>> I might have patches in an RFC state (one thing I'm going to ask about is
>> how I know what the next CQE in the kernel handler is - ublk does this with
>> tags through mq, but I don't understand yet where the tag is increased and
>> what the relation between tag and right CQE order is).
>
> The tag is an attribute of the io request; it originates from the ublk
> driver and is unique for each request within one queue. So ublksrv
> doesn't change it at all, it just uses it, and the ublk driver guarantees
> that it is unique.
>
> In the ublksrv implementation, the tag is encoded into cqe->user_data, so
> we can retrieve the io request via the tag part of cqe->user_data.
Yeah, this is the easy part I understood. At least I hope so :)
>
> Also, I may not understand your question about 'the relation between tag
> and right CQE order'. io_uring provides IOSQE_IO_DRAIN/IOSQE_IO_LINK for
> ordering SQEs, and ublksrv only applies IOSQE_IO_LINK in ublk-qcow2, so
> could you explain 'the relation between tag and right CQE order' in a bit
> more detail?
For fuse (kernel), a VFS request comes in and I need to choose a command
in the ring queue. Right now this is just an atomic counter % queue_size:
fuse_request_alloc_ring()
        req_cnt = atomic_inc_return(&queue->req_cnt);
        tag = req_cnt & (fc->ring.queue_depth - 1); /* cnt % queue_depth */

        ring_req = &queue->ring_req[tag];
I might be wrong, but I think that can be compared a bit to
ublk_queue_rq(). Looks like ublk_queue_rq gets called in blk-mq context
and blk-mq seems to provide rq->tag, which then determines the command
in the ring queue - completion of commands is done in tag-order provided
by blk-mq? The part I didn't figure out yet is where the tag value gets set.
Also interesting is that there is no handling if the ring is already full
- i.e. the ublk_io command is currently busy in ublksrv (user space). Is
that handled auto-magically by blk-mq?
This is one of the parts not handled in my fuse code yet; my current plan
is to have a request queue on top of the (per core) ring queues. Similar
to the existing fuse request queue, just not a single one, but one per
ring queue, processed by that ring queue. Unless there is a better way -
which is another reason to understand how ublk handles this.
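For concreteness, a purely illustrative sketch of that plan (all names
are hypothetical; a per-ring overflow list feeding free ring slots):

#include <linux/list.h>
#include <linux/spinlock.h>

struct ring_queue_sketch {
        spinlock_t lock;
        struct list_head overflow;      /* requests waiting for a slot */
        unsigned int free_slots;        /* free commands in this ring */
};

/* queue a request into the ring, or park it on the overflow list */
static bool ring_queue_or_park(struct ring_queue_sketch *q,
                               struct list_head *req_entry)
{
        bool queued = false;

        spin_lock(&q->lock);
        if (q->free_slots) {
                q->free_slots--;        /* caller fills a ring command */
                queued = true;
        } else {
                list_add_tail(req_entry, &q->overflow);
        }
        spin_unlock(&q->lock);
        return queued;
}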
Thanks,
Bernd
On Sat, Nov 05, 2022 at 12:37:21AM +0100, Bernd Schubert wrote:
>
>
> On 11/4/22 01:44, Ming Lei wrote:
> > On Thu, Nov 03, 2022 at 11:28:29PM +0100, Bernd Schubert wrote:
> > >
> > >
> > > On 11/3/22 09:50, Ming Lei wrote:
> > > > Pass ublk block IO request pages to the kernel backend IO handling
> > > > code via a pipe, so that copying of request pages can be avoided. So
> > > > far, the existing pipe/splice mechanism only works for handling write
> > > > requests.
> > > >
> > > > The initial idea of using splice for zero copy is from Miklos and Stefan.
> > > >
> > > > Zero copy for read requests requires a pipe change to allow one read
> > > > end to produce buffers for another read end to consume. The added
> > > > SPLICE_F_READ_TO_READ flag supports this feature.
> > > >
> > > > READ is handled by sending IORING_OP_SPLICE with SPLICE_F_DIRECT |
> > > > SPLICE_F_READ_TO_READ. WRITE is handled by sending IORING_OP_SPLICE
> > > > with SPLICE_F_DIRECT. A kernel-internal pipe is used to simplify
> > > > userspace and to avoid potential information leaks.
> > >
> > >
> > > Sorry to ask, do you have an ublk branch that gives an example of how
> > > to use this?
> >
> > Here is the ublk splice-zc branch:
> >
> > https://github.com/ming1/ubdsrv/commits/splice-zc
> >
> > It is mentioned in the cover letter, but I guess it should have been
> > added here too, sorry for that. So far only ublk-loop supports it, via:
> >
> > ublk add -t loop -f $BACKING -z
> >
> > Without '-z', ublk-loop is created with zero copy disabled.
>
> Ah, thanks a lot! And sorry, I had missed this part in the cover letter.
>
> I will take a look at your new zero copy code on Monday.
>
>
> >
> > >
> > > I still have several things to fix in my branches, but I got basic fuse
> > > uring with copies working. Adding back splice would be next after posting
> > > rfc patches. My initial assumption was that I needed to duplicate everything
> > > splice does into the fuse .uring_cmd handler - obviously there is a better
> > > way with your patches.
> > >
> > > This week I have a few days off; by the end of next week or the week
> > > after I might have patches in an RFC state (one thing I'm going to ask
> > > about is how I know what the next CQE in the kernel handler is - ublk
> > > does this with tags through mq, but I don't understand yet where the
> > > tag is increased and what the relation between tag and right CQE order
> > > is).
> >
> > The tag is an attribute of the io request; it originates from the ublk
> > driver and is unique for each request within one queue. So ublksrv
> > doesn't change it at all, it just uses it, and the ublk driver
> > guarantees that it is unique.
> >
> > In the ublksrv implementation, the tag is encoded into cqe->user_data,
> > so we can retrieve the io request via the tag part of cqe->user_data.
>
> Yeah, this is the easy part I understood. At least I hope so :)
>
> >
> > Also, I may not understand your question about 'the relation between tag
> > and right CQE order'. io_uring provides IOSQE_IO_DRAIN/IOSQE_IO_LINK for
> > ordering SQEs, and ublksrv only applies IOSQE_IO_LINK in ublk-qcow2, so
> > could you explain 'the relation between tag and right CQE order' in a
> > bit more detail?
>
>
> For fuse (kernel), a VFS request comes in and I need to choose a command
> in the ring queue. Right now this is just an atomic counter % queue_size:
>
> fuse_request_alloc_ring()
> req_cnt = atomic_inc_return(&queue->req_cnt);
> tag = req_cnt & (fc->ring.queue_depth - 1); /* cnt % queue_depth */
>
> ring_req = &queue->ring_req[tag];
>
>
>
> I might be wrong, but I think that can be compared a bit to ublk_queue_rq().
> Looks like ublk_queue_rq gets called in blk-mq context and blk-mq seems to
> provide rq->tag, which then determines the command in the ring queue -
> completion of commands is done in tag-order provided by blk-mq? The part I
The two are not related; the blk-mq tag number means nothing wrt. IO
handling order:
- the tag is allocated via sbitmap, which may return tag numbers in any
order; you can treat the returned number as essentially random
- blk-mq may re-order requests and dispatch them in any order
- once requests are issued to io_uring, userspace may handle these IOs
in any order
- after the backend IO is queued to the kernel via io_uring or libaio or
whatever, it could be completed in any order
> didn't figure out yet is where the tag value gets set.
> Also interesting is that there is no handling if the ring is already full
> - i.e. the ublk_io command is currently busy in ublksrv (user space). Is
> that handled auto-magically by blk-mq?
For ublk, the queue has a fixed depth, so the pre-allocated io_uring size
is enough, and blk-mq throttles IOs at the source once the max queue depth
is reached, so ublk needn't worry about io_uring size/depth.

But fuse may have to consider request throttling.
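In code terms, a sketch (not the exact ublk_drv lines) of why that works:
sizing the blk-mq tag set to the ring depth lets blk-mq's tag allocator
throttle submitters before the pre-allocated ring could overflow:

#include <linux/blk-mq.h>

/* sketch only: set->ops etc. are assumed initialized by the caller */
static int init_tag_set_sketch(struct blk_mq_tag_set *set,
                               unsigned int nr_hw_queues,
                               unsigned int queue_depth)
{
        set->nr_hw_queues = nr_hw_queues;
        set->queue_depth = queue_depth; /* == per-queue ring size */
        return blk_mq_alloc_tag_set(set);
}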
Thanks,
Ming
@@ -42,6 +42,8 @@
#include <linux/mm.h>
#include <asm/page.h>
#include <linux/task_work.h>
+#include <linux/pipe_fs_i.h>
+#include <linux/splice.h>
#include <uapi/linux/ublk_cmd.h>
#define UBLK_MINORS (1U << MINORBITS)
@@ -51,7 +53,8 @@
| UBLK_F_URING_CMD_COMP_IN_TASK \
| UBLK_F_NEED_GET_DATA \
| UBLK_F_USER_RECOVERY \
- | UBLK_F_USER_RECOVERY_REISSUE)
+ | UBLK_F_USER_RECOVERY_REISSUE \
+ | UBLK_F_SPLICE_ZC)
/* All UBLK_PARAM_TYPE_* should be included here */
#define UBLK_PARAM_TYPE_ALL (UBLK_PARAM_TYPE_BASIC | UBLK_PARAM_TYPE_DISCARD)
@@ -61,6 +64,7 @@ struct ublk_rq_data {
struct callback_head work;
struct llist_node node;
};
+ atomic_t handled;
};
struct ublk_uring_cmd_pdu {
@@ -480,6 +484,9 @@ static int ublk_map_io(const struct ublk_queue *ubq, const struct request *req,
if (req_op(req) != REQ_OP_WRITE && req_op(req) != REQ_OP_FLUSH)
return rq_bytes;
+ if (ubq->flags & UBLK_F_SPLICE_ZC)
+ return rq_bytes;
+
if (ublk_rq_has_data(req)) {
struct ublk_map_data data = {
.ubq = ubq,
@@ -501,6 +508,9 @@ static int ublk_unmap_io(const struct ublk_queue *ubq,
{
const unsigned int rq_bytes = blk_rq_bytes(req);
+ if (ubq->flags & UBLK_F_SPLICE_ZC)
+ return rq_bytes;
+
if (req_op(req) == REQ_OP_READ && ublk_rq_has_data(req)) {
struct ublk_map_data data = {
.ubq = ubq,
@@ -858,6 +868,19 @@ static blk_status_t ublk_queue_rq(struct blk_mq_hw_ctx *hctx,
if (ublk_queue_can_use_recovery(ubq) && unlikely(ubq->force_abort))
return BLK_STS_IOERR;
+ if (ubq->flags & UBLK_F_SPLICE_ZC) {
+ struct ublk_rq_data *data = blk_mq_rq_to_pdu(rq);
+
+ atomic_set(&data->handled, 0);
+
+ /*
+ * Order write ->handled and write rq->state in
+ * blk_mq_start_request, the pair barrier is the one
+ * implied in atomic_inc_return() in ublk_splice_read
+ */
+ smp_wmb();
+ }
+
blk_mq_start_request(bd->rq);
if (unlikely(ubq_daemon_is_dying(ubq))) {
@@ -1299,13 +1322,137 @@ static int ublk_ch_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
return -EIOCBQUEUED;
}
+static void ublk_pipe_buf_release(struct pipe_inode_info *pipe,
+ struct pipe_buffer *buf)
+{
+}
+
+static const struct pipe_buf_operations ublk_pipe_buf_ops = {
+ .release = ublk_pipe_buf_release,
+};
+
+/*
+ * Pass request page reference to kernel backend IO handler via pipe
+ *
+ * ublk server has to handle backend IO via splice()
+ */
+static ssize_t ublk_splice_read(struct file *in, loff_t *ppos,
+ struct pipe_inode_info *pipe,
+ size_t len, unsigned int flags)
+{
+ struct ublk_device *ub = in->private_data;
+ struct req_iterator rq_iter;
+ struct bio_vec bv;
+ struct request *req;
+ struct ublk_queue *ubq;
+ u16 tag, q_id;
+ unsigned int done;
+ int ret, buf_offset;
+ struct ublk_rq_data *data;
+
+ if (!(flags & SPLICE_F_DIRECT))
+ return -EPERM;
+
+ /* No, we have to be the in side */
+ if (pipe->consumed_by_read)
+ return -EINVAL;
+
+ if (!ub)
+ return -EPERM;
+
+ tag = ublk_pos_to_tag(*ppos);
+ q_id = ublk_pos_to_hwq(*ppos);
+ buf_offset = ublk_pos_to_buf_offset(*ppos);
+
+ if (q_id >= ub->dev_info.nr_hw_queues)
+ return -EINVAL;
+
+ ubq = ublk_get_queue(ub, q_id);
+ if (!ubq)
+ return -EINVAL;
+
+ if (!(ubq->flags & UBLK_F_SPLICE_ZC))
+ return -EINVAL;
+
+ if (tag >= ubq->q_depth)
+ return -EINVAL;
+
+ req = blk_mq_tag_to_rq(ub->tag_set.tags[q_id], tag);
+ if (!req || !blk_mq_request_started(req))
+ return -EINVAL;
+
+ data = blk_mq_rq_to_pdu(req);
+ if (atomic_add_return(len, &data->handled) > blk_rq_bytes(req) || !len)
+ return -EACCES;
+
+ ret = -EINVAL;
+ if (!ublk_rq_has_data(req))
+ goto exit;
+
+ pr_devel("%s: qid %d tag %u offset %x, request bytes %u, len %llu\n",
+ __func__, q_id, tag, buf_offset, blk_rq_bytes(req),
+ (unsigned long long)len);
+
+ if (buf_offset + len > blk_rq_bytes(req))
+ goto exit;
+
+ if ((req_op(req) == REQ_OP_READ) &&
+ !(flags & SPLICE_F_READ_TO_READ))
+ goto exit;
+
+ if ((req_op(req) != REQ_OP_READ) &&
+ (flags & SPLICE_F_READ_TO_READ))
+ goto exit;
+
+ done = ret = 0;
+ /* todo: is iov_iter ready for handling multipage bvec? */
+ rq_for_each_segment(bv, req, rq_iter) {
+ struct pipe_buffer buf = {
+ .ops = &ublk_pipe_buf_ops,
+ .flags = 0,
+ .page = bv.bv_page,
+ .offset = bv.bv_offset,
+ .len = bv.bv_len,
+ };
+
+ if (buf_offset > 0) {
+ if (buf_offset >= bv.bv_len) {
+ buf_offset -= bv.bv_len;
+ continue;
+ } else {
+ buf.offset += buf_offset;
+ buf.len -= buf_offset;
+ buf_offset = 0;
+ }
+ }
+
+ ret = add_to_pipe(pipe, &buf);
+ if (unlikely(ret < 0))
+ break;
+ done += ret;
+ }
+
+ if (flags & SPLICE_F_READ_TO_READ)
+ pipe->consumed_by_read = true;
+
+ WARN_ON_ONCE(done > len);
+
+ if (done) {
+ *ppos += done;
+ ret = done;
+ }
+exit:
+ return ret;
+}
+
static const struct file_operations ublk_ch_fops = {
.owner = THIS_MODULE,
.open = ublk_ch_open,
.release = ublk_ch_release,
- .llseek = no_llseek,
+ .llseek = noop_llseek,
.uring_cmd = ublk_ch_uring_cmd,
.mmap = ublk_ch_mmap,
+ .splice_read = ublk_splice_read,
};
static void ublk_deinit_queue(struct ublk_device *ub, int q_id)
@@ -52,7 +52,36 @@
#define UBLKSRV_IO_BUF_OFFSET 0x80000000
/* tag bit is 12bit, so at most 4096 IOs for each queue */
-#define UBLK_MAX_QUEUE_DEPTH 4096
+#define UBLK_TAG_BITS 12
+#define UBLK_MAX_QUEUE_DEPTH (1U << UBLK_TAG_BITS)
+
+/* used in ->splice_read for supporting zero-copy */
+#define UBLK_BUFS_SIZE_BITS 42
+#define UBLK_BUFS_SIZE_MASK ((1ULL << UBLK_BUFS_SIZE_BITS) - 1)
+#define UBLK_BUF_SIZE_BITS (UBLK_BUFS_SIZE_BITS - UBLK_TAG_BITS)
+#define UBLK_BUF_MAX_SIZE (1ULL << UBLK_BUF_SIZE_BITS)
+
+static inline __u16 ublk_pos_to_hwq(__u64 pos)
+{
+ return pos >> UBLK_BUFS_SIZE_BITS;
+}
+
+static inline __u32 ublk_pos_to_buf_offset(__u64 pos)
+{
+ return (pos & UBLK_BUFS_SIZE_MASK) & (UBLK_BUF_MAX_SIZE - 1);
+}
+
+static inline __u16 ublk_pos_to_tag(__u64 pos)
+{
+ return (pos & UBLK_BUFS_SIZE_MASK) >> UBLK_BUF_SIZE_BITS;
+}
+
+/* offset of single buffer, which has to be < UBLK_BUF_MAX_SIZE */
+static inline __u64 ublk_pos(__u16 q_id, __u16 tag, __u32 offset)
+{
+ return (((__u64)q_id) << UBLK_BUFS_SIZE_BITS) |
+ ((((__u64)tag) << UBLK_BUF_SIZE_BITS) + offset);
+}
/*
* zero copy requires 4k block size, and can remap ublk driver's io
@@ -79,6 +108,9 @@
#define UBLK_F_USER_RECOVERY_REISSUE (1UL << 4)
+/* splice based read/write zero copy */
+#define UBLK_F_SPLICE_ZC (1UL << 5)
+
/* device state */
#define UBLK_S_DEV_DEAD 0
#define UBLK_S_DEV_LIVE 1