Message ID | 20230413-b4-vsock-dgram-v5-11-581bd37fdb26@bytedance.com |
---|---|
State | New |
From | Bobby Eshleman <bobby.eshleman@bytedance.com> |
Date | Wed, 19 Jul 2023 00:50:15 +0000 |
Subject | [PATCH RFC net-next v5 11/14] vhost/vsock: implement datagram support |
Series | virtio/vsock: support datagrams |
Commit Message
Bobby Eshleman
July 19, 2023, 12:50 a.m. UTC
This commit implements datagram support for vhost/vsock by teaching
vhost to use the common virtio transport datagram functions.
If the virtio RX buffer is too small, then the transmission is
abandoned, the packet dropped, and EHOSTUNREACH is added to the socket's
error queue.
Signed-off-by: Bobby Eshleman <bobby.eshleman@bytedance.com>
---
drivers/vhost/vsock.c | 62 +++++++++++++++++++++++++++++++++++++++++++++---
net/vmw_vsock/af_vsock.c | 5 +++-
2 files changed, 63 insertions(+), 4 deletions(-)
Comments
On 19.07.2023 03:50, Bobby Eshleman wrote:
> This commit implements datagram support for vhost/vsock by teaching
> vhost to use the common virtio transport datagram functions.
>
> If the virtio RX buffer is too small, then the transmission is
> abandoned, the packet dropped, and EHOSTUNREACH is added to the socket's
> error queue.
>
> Signed-off-by: Bobby Eshleman <bobby.eshleman@bytedance.com>
> ---
>  drivers/vhost/vsock.c    | 62 +++++++++++++++++++++++++++++++++++++++++++++---
>  net/vmw_vsock/af_vsock.c |  5 +++-
>  2 files changed, 63 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
> index d5d6a3c3f273..da14260c6654 100644
> --- a/drivers/vhost/vsock.c
> +++ b/drivers/vhost/vsock.c
> @@ -8,6 +8,7 @@
>   */
>  #include <linux/miscdevice.h>
>  #include <linux/atomic.h>
> +#include <linux/errqueue.h>
>  #include <linux/module.h>
>  #include <linux/mutex.h>
>  #include <linux/vmalloc.h>
> @@ -32,7 +33,8 @@
>  enum {
>  	VHOST_VSOCK_FEATURES = VHOST_FEATURES |
>  			       (1ULL << VIRTIO_F_ACCESS_PLATFORM) |
> -			       (1ULL << VIRTIO_VSOCK_F_SEQPACKET)
> +			       (1ULL << VIRTIO_VSOCK_F_SEQPACKET) |
> +			       (1ULL << VIRTIO_VSOCK_F_DGRAM)
>  };
>
>  enum {
> @@ -56,6 +58,7 @@ struct vhost_vsock {
>  	atomic_t queued_replies;
>
>  	u32 guest_cid;
> +	bool dgram_allow;
>  	bool seqpacket_allow;
>  };
>
> @@ -86,6 +89,32 @@ static struct vhost_vsock *vhost_vsock_get(u32 guest_cid)
>  	return NULL;
>  }
>
> +/* Claims ownership of the skb, do not free the skb after calling! */
> +static void
> +vhost_transport_error(struct sk_buff *skb, int err)
> +{
> +	struct sock_exterr_skb *serr;
> +	struct sock *sk = skb->sk;
> +	struct sk_buff *clone;
> +
> +	serr = SKB_EXT_ERR(skb);
> +	memset(serr, 0, sizeof(*serr));
> +	serr->ee.ee_errno = err;
> +	serr->ee.ee_origin = SO_EE_ORIGIN_NONE;
> +
> +	clone = skb_clone(skb, GFP_KERNEL);

May for skb which is error carrier we can use 'sock_omalloc()', not 'skb_clone()' ? TCP uses skb allocated by this function as carriers of error structure. I guess 'skb_clone()' also clones data of origin, but i think that there is no need in data as we insert it to error queue of the socket.

What do You think?

> +	if (!clone)
> +		return;

What will happen here 'if (!clone)' ? skb will leak as it was removed from queue?

> +
> +	if (sock_queue_err_skb(sk, clone))
> +		kfree_skb(clone);
> +
> +	sk->sk_err = err;
> +	sk_error_report(sk);
> +
> +	kfree_skb(skb);
> +}
> +
>  static void
>  vhost_transport_do_send_pkt(struct vhost_vsock *vsock,
>  			    struct vhost_virtqueue *vq)
> @@ -160,9 +189,15 @@ vhost_transport_do_send_pkt(struct vhost_vsock *vsock,
>  		hdr = virtio_vsock_hdr(skb);
>
>  		/* If the packet is greater than the space available in the
> -		 * buffer, we split it using multiple buffers.
> +		 * buffer, we split it using multiple buffers for connectible
> +		 * sockets and drop the packet for datagram sockets.
>  		 */
>  		if (payload_len > iov_len - sizeof(*hdr)) {
> +			if (le16_to_cpu(hdr->type) == VIRTIO_VSOCK_TYPE_DGRAM) {
> +				vhost_transport_error(skb, EHOSTUNREACH);
> +				continue;
> +			}
> +
>  			payload_len = iov_len - sizeof(*hdr);
>
>  			/* As we are copying pieces of large packet's buffer to
> @@ -394,6 +429,7 @@ static bool vhost_vsock_more_replies(struct vhost_vsock *vsock)
>  	return val < vq->num;
>  }
>
> +static bool vhost_transport_dgram_allow(u32 cid, u32 port);
>  static bool vhost_transport_seqpacket_allow(u32 remote_cid);
>
>  static struct virtio_transport vhost_transport = {
> @@ -410,7 +446,8 @@ static struct virtio_transport vhost_transport = {
>  	.cancel_pkt = vhost_transport_cancel_pkt,
>
>  	.dgram_enqueue = virtio_transport_dgram_enqueue,
> -	.dgram_allow = virtio_transport_dgram_allow,
> +	.dgram_allow = vhost_transport_dgram_allow,
> +	.dgram_addr_init = virtio_transport_dgram_addr_init,
>
>  	.stream_enqueue = virtio_transport_stream_enqueue,
>  	.stream_dequeue = virtio_transport_stream_dequeue,
> @@ -443,6 +480,22 @@ static struct virtio_transport vhost_transport = {
>  	.send_pkt = vhost_transport_send_pkt,
>  };
>
> +static bool vhost_transport_dgram_allow(u32 cid, u32 port)
> +{
> +	struct vhost_vsock *vsock;
> +	bool dgram_allow = false;
> +
> +	rcu_read_lock();
> +	vsock = vhost_vsock_get(cid);
> +
> +	if (vsock)
> +		dgram_allow = vsock->dgram_allow;
> +
> +	rcu_read_unlock();
> +
> +	return dgram_allow;
> +}
> +
>  static bool vhost_transport_seqpacket_allow(u32 remote_cid)
>  {
>  	struct vhost_vsock *vsock;
> @@ -799,6 +852,9 @@ static int vhost_vsock_set_features(struct vhost_vsock *vsock, u64 features)
>  	if (features & (1ULL << VIRTIO_VSOCK_F_SEQPACKET))
>  		vsock->seqpacket_allow = true;
>
> +	if (features & (1ULL << VIRTIO_VSOCK_F_DGRAM))
> +		vsock->dgram_allow = true;
> +
>  	for (i = 0; i < ARRAY_SIZE(vsock->vqs); i++) {
>  		vq = &vsock->vqs[i];
>  		mutex_lock(&vq->mutex);
> diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
> index e73f3b2c52f1..449ed63ac2b0 100644
> --- a/net/vmw_vsock/af_vsock.c
> +++ b/net/vmw_vsock/af_vsock.c
> @@ -1427,9 +1427,12 @@ int vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg,
>  		return prot->recvmsg(sk, msg, len, flags, NULL);
>  #endif
>
> -	if (flags & MSG_OOB || flags & MSG_ERRQUEUE)
> +	if (unlikely(flags & MSG_OOB))
>  		return -EOPNOTSUPP;
>
> +	if (unlikely(flags & MSG_ERRQUEUE))
> +		return sock_recv_errqueue(sk, msg, len, SOL_VSOCK, 0);
> +

Sorry, but I get build error here, because SOL_VSOCK in undefined. I think it should be added to include/linux/socket.h and to uapi files also for future use in userspace.

Also Stefano Garzarella <sgarzare@redhat.com> suggested to add define something like VSOCK_RECVERR, in the same way as IP_RECVERR, and use it as last parameter of 'sock_recv_errqueue()'.

>  	transport = vsk->transport;
>
>  	/* Retrieve the head sk_buff from the socket's receive queue. */

Thanks, Arseniy
On Sat, Jul 22, 2023 at 11:42:38AM +0300, Arseniy Krasnov wrote:
> On 19.07.2023 03:50, Bobby Eshleman wrote:
[...]
> > +	clone = skb_clone(skb, GFP_KERNEL);
>
> May for skb which is error carrier we can use 'sock_omalloc()', not
> 'skb_clone()' ? TCP uses skb allocated by this function as carriers of
> error structure. I guess 'skb_clone()' also clones data of origin, but
> i think that there is no need in data as we insert it to error queue
> of the socket.
>
> What do You think?

IIUC skb_clone() is often used in this scenario so that the user can
retrieve the error-causing packet from the error queue. Is there some
reason we shouldn't do this?

I'm seeing that the serr bits need to occur on the clone here, not the
original. I didn't realize the SKB_EXT_ERR() is a skb->cb cast. I'm not
actually sure how this passes the test case since ->cb isn't cloned.

> > +	if (!clone)
> > +		return;
>
> What will happen here 'if (!clone)' ? skb will leak as it was removed
> from queue?

Ah yes, true.

[...]
> > +	if (unlikely(flags & MSG_ERRQUEUE))
> > +		return sock_recv_errqueue(sk, msg, len, SOL_VSOCK, 0);
> > +
>
> Sorry, but I get build error here, because SOL_VSOCK in undefined. I
> think it should be added to include/linux/socket.h and to uapi files
> also for future use in userspace.

Strange, I built each patch individually without issue. My base is
netdev/main with your SOL_VSOCK patch applied. I will look today and see
if I'm missing something.

> Also Stefano Garzarella <sgarzare@redhat.com> suggested to add define
> something like VSOCK_RECVERR, in the same way as IP_RECVERR, and use
> it as last parameter of 'sock_recv_errqueue()'.

Got it, thanks.

Thanks,
Bobby
On Wed, Jul 19, 2023 at 12:50:15AM +0000, Bobby Eshleman wrote:
> This commit implements datagram support for vhost/vsock by teaching
> vhost to use the common virtio transport datagram functions.
>
> If the virtio RX buffer is too small, then the transmission is
> abandoned, the packet dropped, and EHOSTUNREACH is added to the socket's
> error queue.
>
> Signed-off-by: Bobby Eshleman <bobby.eshleman@bytedance.com>

EHOSTUNREACH?

[...]
>  		/* If the packet is greater than the space available in the
> -		 * buffer, we split it using multiple buffers.
> +		 * buffer, we split it using multiple buffers for connectible
> +		 * sockets and drop the packet for datagram sockets.
>  		 */

won't this break things like recently proposed zerocopy?
I think splitup has to be supported for all types.

[...]
On 26.07.2023 20:55, Bobby Eshleman wrote:
> On Sat, Jul 22, 2023 at 11:42:38AM +0300, Arseniy Krasnov wrote:
>> May for skb which is error carrier we can use 'sock_omalloc()', not
>> 'skb_clone()' ? TCP uses skb allocated by this function as carriers
>> of error structure. I guess 'skb_clone()' also clones data of origin,
>> but i think that there is no need in data as we insert it to error
>> queue of the socket.
>>
>> What do You think?
>
> IIUC skb_clone() is often used in this scenario so that the user can
> retrieve the error-causing packet from the error queue. Is there some
> reason we shouldn't do this?
>
> I'm seeing that the serr bits need to occur on the clone here, not the
> original. I didn't realize the SKB_EXT_ERR() is a skb->cb cast. I'm not
> actually sure how this passes the test case since ->cb isn't cloned.

Ah yes, sorry, You are right, I just confused this case with zerocopy
completion handling - there we allocate "empty" skb which carries
completion metadata in its 'cb' field. Hm, but can't we just reinsert
current skb (update it's 'cb' as 'sock_exterr_skb') to error queue of
the socket without cloning it ?

Thanks, Arseniy

[...]
On 26.07.2023 20:55, Bobby Eshleman wrote: > On Sat, Jul 22, 2023 at 11:42:38AM +0300, Arseniy Krasnov wrote: >> >> >> On 19.07.2023 03:50, Bobby Eshleman wrote: >>> This commit implements datagram support for vhost/vsock by teaching >>> vhost to use the common virtio transport datagram functions. >>> >>> If the virtio RX buffer is too small, then the transmission is >>> abandoned, the packet dropped, and EHOSTUNREACH is added to the socket's >>> error queue. >>> >>> Signed-off-by: Bobby Eshleman <bobby.eshleman@bytedance.com> >>> --- >>> drivers/vhost/vsock.c | 62 +++++++++++++++++++++++++++++++++++++++++++++--- >>> net/vmw_vsock/af_vsock.c | 5 +++- >>> 2 files changed, 63 insertions(+), 4 deletions(-) >>> >>> diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c >>> index d5d6a3c3f273..da14260c6654 100644 >>> --- a/drivers/vhost/vsock.c >>> +++ b/drivers/vhost/vsock.c >>> @@ -8,6 +8,7 @@ >>> */ >>> #include <linux/miscdevice.h> >>> #include <linux/atomic.h> >>> +#include <linux/errqueue.h> >>> #include <linux/module.h> >>> #include <linux/mutex.h> >>> #include <linux/vmalloc.h> >>> @@ -32,7 +33,8 @@ >>> enum { >>> VHOST_VSOCK_FEATURES = VHOST_FEATURES | >>> (1ULL << VIRTIO_F_ACCESS_PLATFORM) | >>> - (1ULL << VIRTIO_VSOCK_F_SEQPACKET) >>> + (1ULL << VIRTIO_VSOCK_F_SEQPACKET) | >>> + (1ULL << VIRTIO_VSOCK_F_DGRAM) >>> }; >>> >>> enum { >>> @@ -56,6 +58,7 @@ struct vhost_vsock { >>> atomic_t queued_replies; >>> >>> u32 guest_cid; >>> + bool dgram_allow; >>> bool seqpacket_allow; >>> }; >>> >>> @@ -86,6 +89,32 @@ static struct vhost_vsock *vhost_vsock_get(u32 guest_cid) >>> return NULL; >>> } >>> >>> +/* Claims ownership of the skb, do not free the skb after calling! 
*/ >>> +static void >>> +vhost_transport_error(struct sk_buff *skb, int err) >>> +{ >>> + struct sock_exterr_skb *serr; >>> + struct sock *sk = skb->sk; >>> + struct sk_buff *clone; >>> + >>> + serr = SKB_EXT_ERR(skb); >>> + memset(serr, 0, sizeof(*serr)); >>> + serr->ee.ee_errno = err; >>> + serr->ee.ee_origin = SO_EE_ORIGIN_NONE; >>> + >>> + clone = skb_clone(skb, GFP_KERNEL); >> >> May for skb which is error carrier we can use 'sock_omalloc()', not 'skb_clone()' ? TCP uses skb >> allocated by this function as carriers of error structure. I guess 'skb_clone()' also clones data of origin, >> but i think that there is no need in data as we insert it to error queue of the socket. >> >> What do You think? > > IIUC skb_clone() is often used in this scenario so that the user can > retrieve the error-causing packet from the error queue. Is there some > reason we shouldn't do this? > > I'm seeing that the serr bits need to occur on the clone here, not the > original. I didn't realize the SKB_EXT_ERR() is a skb->cb cast. I'm not > actually sure how this passes the test case since ->cb isn't cloned. > >> >>> + if (!clone) >>> + return; >> >> What will happen here 'if (!clone)' ? skb will leak as it was removed from queue? >> > > Ah yes, true. > >>> + >>> + if (sock_queue_err_skb(sk, clone)) >>> + kfree_skb(clone); >>> + >>> + sk->sk_err = err; >>> + sk_error_report(sk); >>> + >>> + kfree_skb(skb); >>> +} >>> + >>> static void >>> vhost_transport_do_send_pkt(struct vhost_vsock *vsock, >>> struct vhost_virtqueue *vq) >>> @@ -160,9 +189,15 @@ vhost_transport_do_send_pkt(struct vhost_vsock *vsock, >>> hdr = virtio_vsock_hdr(skb); >>> >>> /* If the packet is greater than the space available in the >>> - * buffer, we split it using multiple buffers. >>> + * buffer, we split it using multiple buffers for connectible >>> + * sockets and drop the packet for datagram sockets. 
>>> */ >>> if (payload_len > iov_len - sizeof(*hdr)) { >>> + if (le16_to_cpu(hdr->type) == VIRTIO_VSOCK_TYPE_DGRAM) { >>> + vhost_transport_error(skb, EHOSTUNREACH); >>> + continue; >>> + } >>> + >>> payload_len = iov_len - sizeof(*hdr); >>> >>> /* As we are copying pieces of large packet's buffer to >>> @@ -394,6 +429,7 @@ static bool vhost_vsock_more_replies(struct vhost_vsock *vsock) >>> return val < vq->num; >>> } >>> >>> +static bool vhost_transport_dgram_allow(u32 cid, u32 port); >>> static bool vhost_transport_seqpacket_allow(u32 remote_cid); >>> >>> static struct virtio_transport vhost_transport = { >>> @@ -410,7 +446,8 @@ static struct virtio_transport vhost_transport = { >>> .cancel_pkt = vhost_transport_cancel_pkt, >>> >>> .dgram_enqueue = virtio_transport_dgram_enqueue, >>> - .dgram_allow = virtio_transport_dgram_allow, >>> + .dgram_allow = vhost_transport_dgram_allow, >>> + .dgram_addr_init = virtio_transport_dgram_addr_init, >>> >>> .stream_enqueue = virtio_transport_stream_enqueue, >>> .stream_dequeue = virtio_transport_stream_dequeue, >>> @@ -443,6 +480,22 @@ static struct virtio_transport vhost_transport = { >>> .send_pkt = vhost_transport_send_pkt, >>> }; >>> >>> +static bool vhost_transport_dgram_allow(u32 cid, u32 port) >>> +{ >>> + struct vhost_vsock *vsock; >>> + bool dgram_allow = false; >>> + >>> + rcu_read_lock(); >>> + vsock = vhost_vsock_get(cid); >>> + >>> + if (vsock) >>> + dgram_allow = vsock->dgram_allow; >>> + >>> + rcu_read_unlock(); >>> + >>> + return dgram_allow; >>> +} >>> + >>> static bool vhost_transport_seqpacket_allow(u32 remote_cid) >>> { >>> struct vhost_vsock *vsock; >>> @@ -799,6 +852,9 @@ static int vhost_vsock_set_features(struct vhost_vsock *vsock, u64 features) >>> if (features & (1ULL << VIRTIO_VSOCK_F_SEQPACKET)) >>> vsock->seqpacket_allow = true; >>> >>> + if (features & (1ULL << VIRTIO_VSOCK_F_DGRAM)) >>> + vsock->dgram_allow = true; >>> + >>> for (i = 0; i < ARRAY_SIZE(vsock->vqs); i++) { >>> vq = &vsock->vqs[i]; 
>>> mutex_lock(&vq->mutex); >>> diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c >>> index e73f3b2c52f1..449ed63ac2b0 100644 >>> --- a/net/vmw_vsock/af_vsock.c >>> +++ b/net/vmw_vsock/af_vsock.c >>> @@ -1427,9 +1427,12 @@ int vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg, >>> return prot->recvmsg(sk, msg, len, flags, NULL); >>> #endif >>> >>> - if (flags & MSG_OOB || flags & MSG_ERRQUEUE) >>> + if (unlikely(flags & MSG_OOB)) >>> return -EOPNOTSUPP; >>> >>> + if (unlikely(flags & MSG_ERRQUEUE)) >>> + return sock_recv_errqueue(sk, msg, len, SOL_VSOCK, 0); >>> + >> >> Sorry, but I get build error here, because SOL_VSOCK in undefined. I think it should be added to >> include/linux/socket.h and to uapi files also for future use in userspace. >> > > Strange, I built each patch individually without issue. My base is > netdev/main with your SOL_VSOCK patch applied. I will look today and see > if I'm missing something. I see, that's the difference: I'm trying to run this patchset on the latest net-next (as it is supposed to be merged to net-next). I guess You should add this define anyway when You are ready to merge to net-next (I really don't know whose SOL_VSOCK will be merged first - "Yours" or "mine" :) ) Thanks, Arseniy > >> Also Stefano Garzarella <sgarzare@redhat.com> suggested to add define something like VSOCK_RECVERR, >> in the same way as IP_RECVERR, and use it as last parameter of 'sock_recv_errqueue()'. >> > > Got it, thanks. > >>> transport = vsk->transport; >>> >>> /* Retrieve the head sk_buff from the socket's receive queue. */ >>> >> >> Thanks, Arseniy > > Thanks, > Bobby
On Wed, Jul 26, 2023 at 02:40:22PM -0400, Michael S. Tsirkin wrote: > On Wed, Jul 19, 2023 at 12:50:15AM +0000, Bobby Eshleman wrote: > > This commit implements datagram support for vhost/vsock by teaching > > vhost to use the common virtio transport datagram functions. > > > > If the virtio RX buffer is too small, then the transmission is > > abandoned, the packet dropped, and EHOSTUNREACH is added to the socket's > > error queue. > > > > Signed-off-by: Bobby Eshleman <bobby.eshleman@bytedance.com> > > EHOSTUNREACH? > Yes, in the v4 thread we decided to try to mimic UDP/ICMP behavior when IP packets are lost. If an IP packet is dropped and the full UDP segment is not assembled, then an ICMP_TIME_EXCEEDED message with code ICMP_EXC_FRAGTIME is sent. The sending stack propagates this up to the socket as EHOSTUNREACH. ENOBUFS/ENOMEM is already used for local buffers, so EHOSTUNREACH distinctly points to the remote end of the flow as well. > > > --- > > drivers/vhost/vsock.c | 62 +++++++++++++++++++++++++++++++++++++++++++++--- > > net/vmw_vsock/af_vsock.c | 5 +++- > > 2 files changed, 63 insertions(+), 4 deletions(-) > > > > diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c > > index d5d6a3c3f273..da14260c6654 100644 > > --- a/drivers/vhost/vsock.c > > +++ b/drivers/vhost/vsock.c > > @@ -8,6 +8,7 @@ > > */ > > #include <linux/miscdevice.h> > > #include <linux/atomic.h> > > +#include <linux/errqueue.h> > > #include <linux/module.h> > > #include <linux/mutex.h> > > #include <linux/vmalloc.h> > > @@ -32,7 +33,8 @@ > > enum { > > VHOST_VSOCK_FEATURES = VHOST_FEATURES | > > (1ULL << VIRTIO_F_ACCESS_PLATFORM) | > > - (1ULL << VIRTIO_VSOCK_F_SEQPACKET) > > + (1ULL << VIRTIO_VSOCK_F_SEQPACKET) | > > + (1ULL << VIRTIO_VSOCK_F_DGRAM) > > }; > > > > enum { > > @@ -56,6 +58,7 @@ struct vhost_vsock { > > atomic_t queued_replies; > > > > u32 guest_cid; > > + bool dgram_allow; > > bool seqpacket_allow; > > }; > > > > @@ -86,6 +89,32 @@ static struct vhost_vsock *vhost_vsock_get(u32 guest_cid) > > 
return NULL; > > } > > > > +/* Claims ownership of the skb, do not free the skb after calling! */ > > +static void > > +vhost_transport_error(struct sk_buff *skb, int err) > > +{ > > + struct sock_exterr_skb *serr; > > + struct sock *sk = skb->sk; > > + struct sk_buff *clone; > > + > > + serr = SKB_EXT_ERR(skb); > > + memset(serr, 0, sizeof(*serr)); > > + serr->ee.ee_errno = err; > > + serr->ee.ee_origin = SO_EE_ORIGIN_NONE; > > + > > + clone = skb_clone(skb, GFP_KERNEL); > > + if (!clone) > > + return; > > + > > + if (sock_queue_err_skb(sk, clone)) > > + kfree_skb(clone); > > + > > + sk->sk_err = err; > > + sk_error_report(sk); > > + > > + kfree_skb(skb); > > +} > > + > > static void > > vhost_transport_do_send_pkt(struct vhost_vsock *vsock, > > struct vhost_virtqueue *vq) > > @@ -160,9 +189,15 @@ vhost_transport_do_send_pkt(struct vhost_vsock *vsock, > > hdr = virtio_vsock_hdr(skb); > > > > /* If the packet is greater than the space available in the > > - * buffer, we split it using multiple buffers. > > + * buffer, we split it using multiple buffers for connectible > > + * sockets and drop the packet for datagram sockets. > > */ > > won't this break things like recently proposed zerocopy? > I think splitup has to be supported for all types. 
> > > > if (payload_len > iov_len - sizeof(*hdr)) { > > + if (le16_to_cpu(hdr->type) == VIRTIO_VSOCK_TYPE_DGRAM) { > > + vhost_transport_error(skb, EHOSTUNREACH); > > + continue; > > + } > > + > > payload_len = iov_len - sizeof(*hdr); > > > > /* As we are copying pieces of large packet's buffer to > > @@ -394,6 +429,7 @@ static bool vhost_vsock_more_replies(struct vhost_vsock *vsock) > > return val < vq->num; > > } > > > > +static bool vhost_transport_dgram_allow(u32 cid, u32 port); > > static bool vhost_transport_seqpacket_allow(u32 remote_cid); > > > > static struct virtio_transport vhost_transport = { > > @@ -410,7 +446,8 @@ static struct virtio_transport vhost_transport = { > > .cancel_pkt = vhost_transport_cancel_pkt, > > > > .dgram_enqueue = virtio_transport_dgram_enqueue, > > - .dgram_allow = virtio_transport_dgram_allow, > > + .dgram_allow = vhost_transport_dgram_allow, > > + .dgram_addr_init = virtio_transport_dgram_addr_init, > > > > .stream_enqueue = virtio_transport_stream_enqueue, > > .stream_dequeue = virtio_transport_stream_dequeue, > > @@ -443,6 +480,22 @@ static struct virtio_transport vhost_transport = { > > .send_pkt = vhost_transport_send_pkt, > > }; > > > > +static bool vhost_transport_dgram_allow(u32 cid, u32 port) > > +{ > > + struct vhost_vsock *vsock; > > + bool dgram_allow = false; > > + > > + rcu_read_lock(); > > + vsock = vhost_vsock_get(cid); > > + > > + if (vsock) > > + dgram_allow = vsock->dgram_allow; > > + > > + rcu_read_unlock(); > > + > > + return dgram_allow; > > +} > > + > > static bool vhost_transport_seqpacket_allow(u32 remote_cid) > > { > > struct vhost_vsock *vsock; > > @@ -799,6 +852,9 @@ static int vhost_vsock_set_features(struct vhost_vsock *vsock, u64 features) > > if (features & (1ULL << VIRTIO_VSOCK_F_SEQPACKET)) > > vsock->seqpacket_allow = true; > > > > + if (features & (1ULL << VIRTIO_VSOCK_F_DGRAM)) > > + vsock->dgram_allow = true; > > + > > for (i = 0; i < ARRAY_SIZE(vsock->vqs); i++) { > > vq = &vsock->vqs[i]; > 
> mutex_lock(&vq->mutex); > > diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c > > index e73f3b2c52f1..449ed63ac2b0 100644 > > --- a/net/vmw_vsock/af_vsock.c > > +++ b/net/vmw_vsock/af_vsock.c > > @@ -1427,9 +1427,12 @@ int vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg, > > return prot->recvmsg(sk, msg, len, flags, NULL); > > #endif > > > > - if (flags & MSG_OOB || flags & MSG_ERRQUEUE) > > + if (unlikely(flags & MSG_OOB)) > > return -EOPNOTSUPP; > > > > + if (unlikely(flags & MSG_ERRQUEUE)) > > + return sock_recv_errqueue(sk, msg, len, SOL_VSOCK, 0); > > + > > transport = vsk->transport; > > > > /* Retrieve the head sk_buff from the socket's receive queue. */ > > > > -- > > 2.30.2 > > _______________________________________________ > Virtualization mailing list > Virtualization@lists.linux-foundation.org > https://lists.linuxfoundation.org/mailman/listinfo/virtualization
On Wed, Jul 26, 2023 at 02:40:22PM -0400, Michael S. Tsirkin wrote: > On Wed, Jul 19, 2023 at 12:50:15AM +0000, Bobby Eshleman wrote: > > This commit implements datagram support for vhost/vsock by teaching > > vhost to use the common virtio transport datagram functions. > > > > If the virtio RX buffer is too small, then the transmission is > > abandoned, the packet dropped, and EHOSTUNREACH is added to the socket's > > error queue. > > > > Signed-off-by: Bobby Eshleman <bobby.eshleman@bytedance.com> > > EHOSTUNREACH? > > > > --- > > drivers/vhost/vsock.c | 62 +++++++++++++++++++++++++++++++++++++++++++++--- > > net/vmw_vsock/af_vsock.c | 5 +++- > > 2 files changed, 63 insertions(+), 4 deletions(-) > > > > diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c > > index d5d6a3c3f273..da14260c6654 100644 > > --- a/drivers/vhost/vsock.c > > +++ b/drivers/vhost/vsock.c > > @@ -8,6 +8,7 @@ > > */ > > #include <linux/miscdevice.h> > > #include <linux/atomic.h> > > +#include <linux/errqueue.h> > > #include <linux/module.h> > > #include <linux/mutex.h> > > #include <linux/vmalloc.h> > > @@ -32,7 +33,8 @@ > > enum { > > VHOST_VSOCK_FEATURES = VHOST_FEATURES | > > (1ULL << VIRTIO_F_ACCESS_PLATFORM) | > > - (1ULL << VIRTIO_VSOCK_F_SEQPACKET) > > + (1ULL << VIRTIO_VSOCK_F_SEQPACKET) | > > + (1ULL << VIRTIO_VSOCK_F_DGRAM) > > }; > > > > enum { > > @@ -56,6 +58,7 @@ struct vhost_vsock { > > atomic_t queued_replies; > > > > u32 guest_cid; > > + bool dgram_allow; > > bool seqpacket_allow; > > }; > > > > @@ -86,6 +89,32 @@ static struct vhost_vsock *vhost_vsock_get(u32 guest_cid) > > return NULL; > > } > > > > +/* Claims ownership of the skb, do not free the skb after calling! 
*/ > > +static void > > +vhost_transport_error(struct sk_buff *skb, int err) > > +{ > > + struct sock_exterr_skb *serr; > > + struct sock *sk = skb->sk; > > + struct sk_buff *clone; > > + > > + serr = SKB_EXT_ERR(skb); > > + memset(serr, 0, sizeof(*serr)); > > + serr->ee.ee_errno = err; > > + serr->ee.ee_origin = SO_EE_ORIGIN_NONE; > > + > > + clone = skb_clone(skb, GFP_KERNEL); > > + if (!clone) > > + return; > > + > > + if (sock_queue_err_skb(sk, clone)) > > + kfree_skb(clone); > > + > > + sk->sk_err = err; > > + sk_error_report(sk); > > + > > + kfree_skb(skb); > > +} > > + > > static void > > vhost_transport_do_send_pkt(struct vhost_vsock *vsock, > > struct vhost_virtqueue *vq) > > @@ -160,9 +189,15 @@ vhost_transport_do_send_pkt(struct vhost_vsock *vsock, > > hdr = virtio_vsock_hdr(skb); > > > > /* If the packet is greater than the space available in the > > - * buffer, we split it using multiple buffers. > > + * buffer, we split it using multiple buffers for connectible > > + * sockets and drop the packet for datagram sockets. > > */ > > won't this break things like recently proposed zerocopy? > I think splitup has to be supported for all types. > Could you elaborate? Is there something about zerocopy that would prohibit the transport from dropping a datagram? 
> > > if (payload_len > iov_len - sizeof(*hdr)) { > > + if (le16_to_cpu(hdr->type) == VIRTIO_VSOCK_TYPE_DGRAM) { > > + vhost_transport_error(skb, EHOSTUNREACH); > > + continue; > > + } > > + > > payload_len = iov_len - sizeof(*hdr); > > > > /* As we are copying pieces of large packet's buffer to > > @@ -394,6 +429,7 @@ static bool vhost_vsock_more_replies(struct vhost_vsock *vsock) > > return val < vq->num; > > } > > > > +static bool vhost_transport_dgram_allow(u32 cid, u32 port); > > static bool vhost_transport_seqpacket_allow(u32 remote_cid); > > > > static struct virtio_transport vhost_transport = { > > @@ -410,7 +446,8 @@ static struct virtio_transport vhost_transport = { > > .cancel_pkt = vhost_transport_cancel_pkt, > > > > .dgram_enqueue = virtio_transport_dgram_enqueue, > > - .dgram_allow = virtio_transport_dgram_allow, > > + .dgram_allow = vhost_transport_dgram_allow, > > + .dgram_addr_init = virtio_transport_dgram_addr_init, > > > > .stream_enqueue = virtio_transport_stream_enqueue, > > .stream_dequeue = virtio_transport_stream_dequeue, > > @@ -443,6 +480,22 @@ static struct virtio_transport vhost_transport = { > > .send_pkt = vhost_transport_send_pkt, > > }; > > > > +static bool vhost_transport_dgram_allow(u32 cid, u32 port) > > +{ > > + struct vhost_vsock *vsock; > > + bool dgram_allow = false; > > + > > + rcu_read_lock(); > > + vsock = vhost_vsock_get(cid); > > + > > + if (vsock) > > + dgram_allow = vsock->dgram_allow; > > + > > + rcu_read_unlock(); > > + > > + return dgram_allow; > > +} > > + > > static bool vhost_transport_seqpacket_allow(u32 remote_cid) > > { > > struct vhost_vsock *vsock; > > @@ -799,6 +852,9 @@ static int vhost_vsock_set_features(struct vhost_vsock *vsock, u64 features) > > if (features & (1ULL << VIRTIO_VSOCK_F_SEQPACKET)) > > vsock->seqpacket_allow = true; > > > > + if (features & (1ULL << VIRTIO_VSOCK_F_DGRAM)) > > + vsock->dgram_allow = true; > > + > > for (i = 0; i < ARRAY_SIZE(vsock->vqs); i++) { > > vq = &vsock->vqs[i]; > > 
mutex_lock(&vq->mutex); > > diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c > > index e73f3b2c52f1..449ed63ac2b0 100644 > > --- a/net/vmw_vsock/af_vsock.c > > +++ b/net/vmw_vsock/af_vsock.c > > @@ -1427,9 +1427,12 @@ int vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg, > > return prot->recvmsg(sk, msg, len, flags, NULL); > > #endif > > > > - if (flags & MSG_OOB || flags & MSG_ERRQUEUE) > > + if (unlikely(flags & MSG_OOB)) > > return -EOPNOTSUPP; > > > > + if (unlikely(flags & MSG_ERRQUEUE)) > > + return sock_recv_errqueue(sk, msg, len, SOL_VSOCK, 0); > > + > > transport = vsk->transport; > > > > /* Retrieve the head sk_buff from the socket's receive queue. */ > > > > -- > > 2.30.2
On Thu, Jul 27, 2023 at 11:00:55AM +0300, Arseniy Krasnov wrote: > > > On 26.07.2023 20:55, Bobby Eshleman wrote: > > On Sat, Jul 22, 2023 at 11:42:38AM +0300, Arseniy Krasnov wrote: > >> > >> > >> On 19.07.2023 03:50, Bobby Eshleman wrote: > >>> This commit implements datagram support for vhost/vsock by teaching > >>> vhost to use the common virtio transport datagram functions. > >>> > >>> If the virtio RX buffer is too small, then the transmission is > >>> abandoned, the packet dropped, and EHOSTUNREACH is added to the socket's > >>> error queue. > >>> > >>> Signed-off-by: Bobby Eshleman <bobby.eshleman@bytedance.com> > >>> --- > >>> drivers/vhost/vsock.c | 62 +++++++++++++++++++++++++++++++++++++++++++++--- > >>> net/vmw_vsock/af_vsock.c | 5 +++- > >>> 2 files changed, 63 insertions(+), 4 deletions(-) > >>> > >>> diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c > >>> index d5d6a3c3f273..da14260c6654 100644 > >>> --- a/drivers/vhost/vsock.c > >>> +++ b/drivers/vhost/vsock.c > >>> @@ -8,6 +8,7 @@ > >>> */ > >>> #include <linux/miscdevice.h> > >>> #include <linux/atomic.h> > >>> +#include <linux/errqueue.h> > >>> #include <linux/module.h> > >>> #include <linux/mutex.h> > >>> #include <linux/vmalloc.h> > >>> @@ -32,7 +33,8 @@ > >>> enum { > >>> VHOST_VSOCK_FEATURES = VHOST_FEATURES | > >>> (1ULL << VIRTIO_F_ACCESS_PLATFORM) | > >>> - (1ULL << VIRTIO_VSOCK_F_SEQPACKET) > >>> + (1ULL << VIRTIO_VSOCK_F_SEQPACKET) | > >>> + (1ULL << VIRTIO_VSOCK_F_DGRAM) > >>> }; > >>> > >>> enum { > >>> @@ -56,6 +58,7 @@ struct vhost_vsock { > >>> atomic_t queued_replies; > >>> > >>> u32 guest_cid; > >>> + bool dgram_allow; > >>> bool seqpacket_allow; > >>> }; > >>> > >>> @@ -86,6 +89,32 @@ static struct vhost_vsock *vhost_vsock_get(u32 guest_cid) > >>> return NULL; > >>> } > >>> > >>> +/* Claims ownership of the skb, do not free the skb after calling! 
*/ > >>> +static void > >>> +vhost_transport_error(struct sk_buff *skb, int err) > >>> +{ > >>> + struct sock_exterr_skb *serr; > >>> + struct sock *sk = skb->sk; > >>> + struct sk_buff *clone; > >>> + > >>> + serr = SKB_EXT_ERR(skb); > >>> + memset(serr, 0, sizeof(*serr)); > >>> + serr->ee.ee_errno = err; > >>> + serr->ee.ee_origin = SO_EE_ORIGIN_NONE; > >>> + > >>> + clone = skb_clone(skb, GFP_KERNEL); > >> > >> May for skb which is error carrier we can use 'sock_omalloc()', not 'skb_clone()' ? TCP uses skb > >> allocated by this function as carriers of error structure. I guess 'skb_clone()' also clones data of origin, > >> but i think that there is no need in data as we insert it to error queue of the socket. > >> > >> What do You think? > > > > IIUC skb_clone() is often used in this scenario so that the user can > > retrieve the error-causing packet from the error queue. Is there some > > reason we shouldn't do this? > > > > I'm seeing that the serr bits need to occur on the clone here, not the > > original. I didn't realize the SKB_EXT_ERR() is a skb->cb cast. I'm not > > actually sure how this passes the test case since ->cb isn't cloned. > > Ah yes, sorry, You are right, I just confused this case with zerocopy completion > handling - there we allocate "empty" skb which carries completion metadata in its > 'cb' field. > > Hm, but can't we just reinsert current skb (update it's 'cb' as 'sock_exterr_skb') > to error queue of the socket without cloning it ? > > Thanks, Arseniy > I just assumed other socket types used skb_clone() for some reason unknown to me and I didn't want to deviate. If it is fine to just use the skb directly, then I am happy to make that change. Best, Bobby > > > >> > >>> + if (!clone) > >>> + return; > >> > >> What will happen here 'if (!clone)' ? skb will leak as it was removed from queue? > >> > > > > Ah yes, true. 
> > > >>> + > >>> + if (sock_queue_err_skb(sk, clone)) > >>> + kfree_skb(clone); > >>> + > >>> + sk->sk_err = err; > >>> + sk_error_report(sk); > >>> + > >>> + kfree_skb(skb); > >>> +} > >>> + > >>> static void > >>> vhost_transport_do_send_pkt(struct vhost_vsock *vsock, > >>> struct vhost_virtqueue *vq) > >>> @@ -160,9 +189,15 @@ vhost_transport_do_send_pkt(struct vhost_vsock *vsock, > >>> hdr = virtio_vsock_hdr(skb); > >>> > >>> /* If the packet is greater than the space available in the > >>> - * buffer, we split it using multiple buffers. > >>> + * buffer, we split it using multiple buffers for connectible > >>> + * sockets and drop the packet for datagram sockets. > >>> */ > >>> if (payload_len > iov_len - sizeof(*hdr)) { > >>> + if (le16_to_cpu(hdr->type) == VIRTIO_VSOCK_TYPE_DGRAM) { > >>> + vhost_transport_error(skb, EHOSTUNREACH); > >>> + continue; > >>> + } > >>> + > >>> payload_len = iov_len - sizeof(*hdr); > >>> > >>> /* As we are copying pieces of large packet's buffer to > >>> @@ -394,6 +429,7 @@ static bool vhost_vsock_more_replies(struct vhost_vsock *vsock) > >>> return val < vq->num; > >>> } > >>> > >>> +static bool vhost_transport_dgram_allow(u32 cid, u32 port); > >>> static bool vhost_transport_seqpacket_allow(u32 remote_cid); > >>> > >>> static struct virtio_transport vhost_transport = { > >>> @@ -410,7 +446,8 @@ static struct virtio_transport vhost_transport = { > >>> .cancel_pkt = vhost_transport_cancel_pkt, > >>> > >>> .dgram_enqueue = virtio_transport_dgram_enqueue, > >>> - .dgram_allow = virtio_transport_dgram_allow, > >>> + .dgram_allow = vhost_transport_dgram_allow, > >>> + .dgram_addr_init = virtio_transport_dgram_addr_init, > >>> > >>> .stream_enqueue = virtio_transport_stream_enqueue, > >>> .stream_dequeue = virtio_transport_stream_dequeue, > >>> @@ -443,6 +480,22 @@ static struct virtio_transport vhost_transport = { > >>> .send_pkt = vhost_transport_send_pkt, > >>> }; > >>> > >>> +static bool vhost_transport_dgram_allow(u32 cid, u32 
port) > >>> +{ > >>> + struct vhost_vsock *vsock; > >>> + bool dgram_allow = false; > >>> + > >>> + rcu_read_lock(); > >>> + vsock = vhost_vsock_get(cid); > >>> + > >>> + if (vsock) > >>> + dgram_allow = vsock->dgram_allow; > >>> + > >>> + rcu_read_unlock(); > >>> + > >>> + return dgram_allow; > >>> +} > >>> + > >>> static bool vhost_transport_seqpacket_allow(u32 remote_cid) > >>> { > >>> struct vhost_vsock *vsock; > >>> @@ -799,6 +852,9 @@ static int vhost_vsock_set_features(struct vhost_vsock *vsock, u64 features) > >>> if (features & (1ULL << VIRTIO_VSOCK_F_SEQPACKET)) > >>> vsock->seqpacket_allow = true; > >>> > >>> + if (features & (1ULL << VIRTIO_VSOCK_F_DGRAM)) > >>> + vsock->dgram_allow = true; > >>> + > >>> for (i = 0; i < ARRAY_SIZE(vsock->vqs); i++) { > >>> vq = &vsock->vqs[i]; > >>> mutex_lock(&vq->mutex); > >>> diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c > >>> index e73f3b2c52f1..449ed63ac2b0 100644 > >>> --- a/net/vmw_vsock/af_vsock.c > >>> +++ b/net/vmw_vsock/af_vsock.c > >>> @@ -1427,9 +1427,12 @@ int vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg, > >>> return prot->recvmsg(sk, msg, len, flags, NULL); > >>> #endif > >>> > >>> - if (flags & MSG_OOB || flags & MSG_ERRQUEUE) > >>> + if (unlikely(flags & MSG_OOB)) > >>> return -EOPNOTSUPP; > >>> > >>> + if (unlikely(flags & MSG_ERRQUEUE)) > >>> + return sock_recv_errqueue(sk, msg, len, SOL_VSOCK, 0); > >>> + > >> > >> Sorry, but I get build error here, because SOL_VSOCK in undefined. I think it should be added to > >> include/linux/socket.h and to uapi files also for future use in userspace. > >> > > > > Strange, I built each patch individually without issue. My base is > > netdev/main with your SOL_VSOCK patch applied. I will look today and see > > if I'm missing something. 
> > > >> Also Stefano Garzarella <sgarzare@redhat.com> suggested to add define something like VSOCK_RECVERR, > >> in the same way as IP_RECVERR, and use it as last parameter of 'sock_recv_errqueue()'. > >> > > > > Got it, thanks. > > > >>> transport = vsk->transport; > >>> > >>> /* Retrieve the head sk_buff from the socket's receive queue. */ > >>> > >> > >> Thanks, Arseniy > > > > Thanks, > > Bobby
On 03.08.2023 00:23, Bobby Eshleman wrote: > On Thu, Jul 27, 2023 at 11:00:55AM +0300, Arseniy Krasnov wrote: >> >> >> On 26.07.2023 20:55, Bobby Eshleman wrote: >>> On Sat, Jul 22, 2023 at 11:42:38AM +0300, Arseniy Krasnov wrote: >>>> >>>> >>>> On 19.07.2023 03:50, Bobby Eshleman wrote: >>>>> This commit implements datagram support for vhost/vsock by teaching >>>>> vhost to use the common virtio transport datagram functions. >>>>> >>>>> If the virtio RX buffer is too small, then the transmission is >>>>> abandoned, the packet dropped, and EHOSTUNREACH is added to the socket's >>>>> error queue. >>>>> >>>>> Signed-off-by: Bobby Eshleman <bobby.eshleman@bytedance.com> >>>>> --- >>>>> drivers/vhost/vsock.c | 62 +++++++++++++++++++++++++++++++++++++++++++++--- >>>>> net/vmw_vsock/af_vsock.c | 5 +++- >>>>> 2 files changed, 63 insertions(+), 4 deletions(-) >>>>> >>>>> diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c >>>>> index d5d6a3c3f273..da14260c6654 100644 >>>>> --- a/drivers/vhost/vsock.c >>>>> +++ b/drivers/vhost/vsock.c >>>>> @@ -8,6 +8,7 @@ >>>>> */ >>>>> #include <linux/miscdevice.h> >>>>> #include <linux/atomic.h> >>>>> +#include <linux/errqueue.h> >>>>> #include <linux/module.h> >>>>> #include <linux/mutex.h> >>>>> #include <linux/vmalloc.h> >>>>> @@ -32,7 +33,8 @@ >>>>> enum { >>>>> VHOST_VSOCK_FEATURES = VHOST_FEATURES | >>>>> (1ULL << VIRTIO_F_ACCESS_PLATFORM) | >>>>> - (1ULL << VIRTIO_VSOCK_F_SEQPACKET) >>>>> + (1ULL << VIRTIO_VSOCK_F_SEQPACKET) | >>>>> + (1ULL << VIRTIO_VSOCK_F_DGRAM) >>>>> }; >>>>> >>>>> enum { >>>>> @@ -56,6 +58,7 @@ struct vhost_vsock { >>>>> atomic_t queued_replies; >>>>> >>>>> u32 guest_cid; >>>>> + bool dgram_allow; >>>>> bool seqpacket_allow; >>>>> }; >>>>> >>>>> @@ -86,6 +89,32 @@ static struct vhost_vsock *vhost_vsock_get(u32 guest_cid) >>>>> return NULL; >>>>> } >>>>> >>>>> +/* Claims ownership of the skb, do not free the skb after calling! 
*/ >>>>> +static void >>>>> +vhost_transport_error(struct sk_buff *skb, int err) >>>>> +{ >>>>> + struct sock_exterr_skb *serr; >>>>> + struct sock *sk = skb->sk; >>>>> + struct sk_buff *clone; >>>>> + >>>>> + serr = SKB_EXT_ERR(skb); >>>>> + memset(serr, 0, sizeof(*serr)); >>>>> + serr->ee.ee_errno = err; >>>>> + serr->ee.ee_origin = SO_EE_ORIGIN_NONE; >>>>> + >>>>> + clone = skb_clone(skb, GFP_KERNEL); >>>> >>>> May for skb which is error carrier we can use 'sock_omalloc()', not 'skb_clone()' ? TCP uses skb >>>> allocated by this function as carriers of error structure. I guess 'skb_clone()' also clones data of origin, >>>> but i think that there is no need in data as we insert it to error queue of the socket. >>>> >>>> What do You think? >>> >>> IIUC skb_clone() is often used in this scenario so that the user can >>> retrieve the error-causing packet from the error queue. Is there some >>> reason we shouldn't do this? >>> >>> I'm seeing that the serr bits need to occur on the clone here, not the >>> original. I didn't realize the SKB_EXT_ERR() is a skb->cb cast. I'm not >>> actually sure how this passes the test case since ->cb isn't cloned. >> >> Ah yes, sorry, You are right, I just confused this case with zerocopy completion >> handling - there we allocate "empty" skb which carries completion metadata in its >> 'cb' field. >> >> Hm, but can't we just reinsert current skb (update it's 'cb' as 'sock_exterr_skb') >> to error queue of the socket without cloning it ? >> >> Thanks, Arseniy >> > > I just assumed other socket types used skb_clone() for some reason > unknown to me and I didn't want to deviate. > > If it is fine to just use the skb directly, then I am happy to make that > change. Agree, it is better to use behaviour from already implemented sockets. 
I also found that ICMP clones the skb in this way: https://elixir.bootlin.com/linux/latest/source/net/ipv4/ip_sockglue.c#L412 skb = skb_clone(skb, GFP_ATOMIC); I guess there is some reason behind 'skb = skb_clone(skb)'... Thanks, Arseniy > > Best, > Bobby > >>> >>>> >>>>> + if (!clone) >>>>> + return; >>>> >>>> What will happen here 'if (!clone)' ? skb will leak as it was removed from queue? >>>> >>> >>> Ah yes, true. >>> >>>>> + >>>>> + if (sock_queue_err_skb(sk, clone)) >>>>> + kfree_skb(clone); >>>>> + >>>>> + sk->sk_err = err; >>>>> + sk_error_report(sk); >>>>> + >>>>> + kfree_skb(skb); >>>>> +} >>>>> + >>>>> static void >>>>> vhost_transport_do_send_pkt(struct vhost_vsock *vsock, >>>>> struct vhost_virtqueue *vq) >>>>> @@ -160,9 +189,15 @@ vhost_transport_do_send_pkt(struct vhost_vsock *vsock, >>>>> hdr = virtio_vsock_hdr(skb); >>>>> >>>>> /* If the packet is greater than the space available in the >>>>> - * buffer, we split it using multiple buffers. >>>>> + * buffer, we split it using multiple buffers for connectible >>>>> + * sockets and drop the packet for datagram sockets. 
>>>>>  		 */
>>>>>  		if (payload_len > iov_len - sizeof(*hdr)) {
>>>>> +			if (le16_to_cpu(hdr->type) == VIRTIO_VSOCK_TYPE_DGRAM) {
>>>>> +				vhost_transport_error(skb, EHOSTUNREACH);
>>>>> +				continue;
>>>>> +			}
>>>>> +
>>>>>  			payload_len = iov_len - sizeof(*hdr);
>>>>>
>>>>>  			/* As we are copying pieces of large packet's buffer to
>>>>> @@ -394,6 +429,7 @@ static bool vhost_vsock_more_replies(struct vhost_vsock *vsock)
>>>>>  	return val < vq->num;
>>>>>  }
>>>>>
>>>>> +static bool vhost_transport_dgram_allow(u32 cid, u32 port);
>>>>>  static bool vhost_transport_seqpacket_allow(u32 remote_cid);
>>>>>
>>>>>  static struct virtio_transport vhost_transport = {
>>>>> @@ -410,7 +446,8 @@ static struct virtio_transport vhost_transport = {
>>>>>  	.cancel_pkt               = vhost_transport_cancel_pkt,
>>>>>
>>>>>  	.dgram_enqueue            = virtio_transport_dgram_enqueue,
>>>>> -	.dgram_allow              = virtio_transport_dgram_allow,
>>>>> +	.dgram_allow              = vhost_transport_dgram_allow,
>>>>> +	.dgram_addr_init          = virtio_transport_dgram_addr_init,
>>>>>
>>>>>  	.stream_enqueue           = virtio_transport_stream_enqueue,
>>>>>  	.stream_dequeue           = virtio_transport_stream_dequeue,
>>>>> @@ -443,6 +480,22 @@ static struct virtio_transport vhost_transport = {
>>>>>  	.send_pkt                 = vhost_transport_send_pkt,
>>>>>  };
>>>>>
>>>>> +static bool vhost_transport_dgram_allow(u32 cid, u32 port)
>>>>> +{
>>>>> +	struct vhost_vsock *vsock;
>>>>> +	bool dgram_allow = false;
>>>>> +
>>>>> +	rcu_read_lock();
>>>>> +	vsock = vhost_vsock_get(cid);
>>>>> +
>>>>> +	if (vsock)
>>>>> +		dgram_allow = vsock->dgram_allow;
>>>>> +
>>>>> +	rcu_read_unlock();
>>>>> +
>>>>> +	return dgram_allow;
>>>>> +}
>>>>> +
>>>>>  static bool vhost_transport_seqpacket_allow(u32 remote_cid)
>>>>>  {
>>>>>  	struct vhost_vsock *vsock;
>>>>> @@ -799,6 +852,9 @@ static int vhost_vsock_set_features(struct vhost_vsock *vsock, u64 features)
>>>>>  	if (features & (1ULL << VIRTIO_VSOCK_F_SEQPACKET))
>>>>>  		vsock->seqpacket_allow = true;
>>>>>
>>>>> +	if (features & (1ULL << VIRTIO_VSOCK_F_DGRAM))
>>>>> +		vsock->dgram_allow = true;
>>>>> +
>>>>>  	for (i = 0; i < ARRAY_SIZE(vsock->vqs); i++) {
>>>>>  		vq = &vsock->vqs[i];
>>>>>  		mutex_lock(&vq->mutex);
>>>>> diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
>>>>> index e73f3b2c52f1..449ed63ac2b0 100644
>>>>> --- a/net/vmw_vsock/af_vsock.c
>>>>> +++ b/net/vmw_vsock/af_vsock.c
>>>>> @@ -1427,9 +1427,12 @@ int vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg,
>>>>>  		return prot->recvmsg(sk, msg, len, flags, NULL);
>>>>>  #endif
>>>>>
>>>>> -	if (flags & MSG_OOB || flags & MSG_ERRQUEUE)
>>>>> +	if (unlikely(flags & MSG_OOB))
>>>>>  		return -EOPNOTSUPP;
>>>>>
>>>>> +	if (unlikely(flags & MSG_ERRQUEUE))
>>>>> +		return sock_recv_errqueue(sk, msg, len, SOL_VSOCK, 0);
>>>>> +
>>>>
>>>> Sorry, but I get a build error here, because SOL_VSOCK is undefined. I think it should be added to
>>>> include/linux/socket.h and to the uapi files as well, for future use in userspace.
>>>>
>>>
>>> Strange, I built each patch individually without issue. My base is
>>> netdev/main with your SOL_VSOCK patch applied. I will look today and see
>>> if I'm missing something.
>>>
>>>> Also Stefano Garzarella <sgarzare@redhat.com> suggested adding a define, something like VSOCK_RECVERR
>>>> in the same way as IP_RECVERR, and using it as the last parameter of 'sock_recv_errqueue()'.
>>>>
>>>
>>> Got it, thanks.
>>>
>>>>>  	transport = vsk->transport;
>>>>>
>>>>>  	/* Retrieve the head sk_buff from the socket's receive queue. */
>>>>>
>>>>
>>>> Thanks, Arseniy
>>>
>>> Thanks,
>>> Bobby
diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index d5d6a3c3f273..da14260c6654 100644
--- a/drivers/vhost/vsock.c
+++ b/drivers/vhost/vsock.c
@@ -8,6 +8,7 @@
  */
 #include <linux/miscdevice.h>
 #include <linux/atomic.h>
+#include <linux/errqueue.h>
 #include <linux/module.h>
 #include <linux/mutex.h>
 #include <linux/vmalloc.h>
@@ -32,7 +33,8 @@
 enum {
 	VHOST_VSOCK_FEATURES = VHOST_FEATURES |
 			       (1ULL << VIRTIO_F_ACCESS_PLATFORM) |
-			       (1ULL << VIRTIO_VSOCK_F_SEQPACKET)
+			       (1ULL << VIRTIO_VSOCK_F_SEQPACKET) |
+			       (1ULL << VIRTIO_VSOCK_F_DGRAM)
 };

 enum {
@@ -56,6 +58,7 @@ struct vhost_vsock {
 	atomic_t queued_replies;

 	u32 guest_cid;
+	bool dgram_allow;
 	bool seqpacket_allow;
 };

@@ -86,6 +89,32 @@ static struct vhost_vsock *vhost_vsock_get(u32 guest_cid)
 	return NULL;
 }

+/* Claims ownership of the skb, do not free the skb after calling! */
+static void
+vhost_transport_error(struct sk_buff *skb, int err)
+{
+	struct sock_exterr_skb *serr;
+	struct sock *sk = skb->sk;
+	struct sk_buff *clone;
+
+	serr = SKB_EXT_ERR(skb);
+	memset(serr, 0, sizeof(*serr));
+	serr->ee.ee_errno = err;
+	serr->ee.ee_origin = SO_EE_ORIGIN_NONE;
+
+	clone = skb_clone(skb, GFP_KERNEL);
+	if (!clone)
+		return;
+
+	if (sock_queue_err_skb(sk, clone))
+		kfree_skb(clone);
+
+	sk->sk_err = err;
+	sk_error_report(sk);
+
+	kfree_skb(skb);
+}
+
 static void
 vhost_transport_do_send_pkt(struct vhost_vsock *vsock,
 			    struct vhost_virtqueue *vq)
@@ -160,9 +189,15 @@ vhost_transport_do_send_pkt(struct vhost_vsock *vsock,
 		hdr = virtio_vsock_hdr(skb);

 		/* If the packet is greater than the space available in the
-		 * buffer, we split it using multiple buffers.
+		 * buffer, we split it using multiple buffers for connectible
+		 * sockets and drop the packet for datagram sockets.
 		 */
 		if (payload_len > iov_len - sizeof(*hdr)) {
+			if (le16_to_cpu(hdr->type) == VIRTIO_VSOCK_TYPE_DGRAM) {
+				vhost_transport_error(skb, EHOSTUNREACH);
+				continue;
+			}
+
 			payload_len = iov_len - sizeof(*hdr);

 			/* As we are copying pieces of large packet's buffer to
@@ -394,6 +429,7 @@ static bool vhost_vsock_more_replies(struct vhost_vsock *vsock)
 	return val < vq->num;
 }

+static bool vhost_transport_dgram_allow(u32 cid, u32 port);
 static bool vhost_transport_seqpacket_allow(u32 remote_cid);

 static struct virtio_transport vhost_transport = {
@@ -410,7 +446,8 @@ static struct virtio_transport vhost_transport = {
 	.cancel_pkt               = vhost_transport_cancel_pkt,

 	.dgram_enqueue            = virtio_transport_dgram_enqueue,
-	.dgram_allow              = virtio_transport_dgram_allow,
+	.dgram_allow              = vhost_transport_dgram_allow,
+	.dgram_addr_init          = virtio_transport_dgram_addr_init,

 	.stream_enqueue           = virtio_transport_stream_enqueue,
 	.stream_dequeue           = virtio_transport_stream_dequeue,
@@ -443,6 +480,22 @@ static struct virtio_transport vhost_transport = {
 	.send_pkt                 = vhost_transport_send_pkt,
 };

+static bool vhost_transport_dgram_allow(u32 cid, u32 port)
+{
+	struct vhost_vsock *vsock;
+	bool dgram_allow = false;
+
+	rcu_read_lock();
+	vsock = vhost_vsock_get(cid);
+
+	if (vsock)
+		dgram_allow = vsock->dgram_allow;
+
+	rcu_read_unlock();
+
+	return dgram_allow;
+}
+
 static bool vhost_transport_seqpacket_allow(u32 remote_cid)
 {
 	struct vhost_vsock *vsock;
@@ -799,6 +852,9 @@ static int vhost_vsock_set_features(struct vhost_vsock *vsock, u64 features)
 	if (features & (1ULL << VIRTIO_VSOCK_F_SEQPACKET))
 		vsock->seqpacket_allow = true;

+	if (features & (1ULL << VIRTIO_VSOCK_F_DGRAM))
+		vsock->dgram_allow = true;
+
 	for (i = 0; i < ARRAY_SIZE(vsock->vqs); i++) {
 		vq = &vsock->vqs[i];
 		mutex_lock(&vq->mutex);
diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
index e73f3b2c52f1..449ed63ac2b0 100644
--- a/net/vmw_vsock/af_vsock.c
+++ b/net/vmw_vsock/af_vsock.c
@@ -1427,9 +1427,12 @@ int vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg,
 		return prot->recvmsg(sk, msg, len, flags, NULL);
 #endif

-	if (flags & MSG_OOB || flags & MSG_ERRQUEUE)
+	if (unlikely(flags & MSG_OOB))
 		return -EOPNOTSUPP;

+	if (unlikely(flags & MSG_ERRQUEUE))
+		return sock_recv_errqueue(sk, msg, len, SOL_VSOCK, 0);
+
 	transport = vsk->transport;

 	/* Retrieve the head sk_buff from the socket's receive queue. */