From patchwork Sat Jun 10 00:58:28 2023
X-Patchwork-Submitter: Bobby Eshleman
X-Patchwork-Id: 105869
From: Bobby Eshleman
Date: Sat, 10 Jun 2023 00:58:28 +0000
Subject: [PATCH RFC net-next v4 1/8] vsock/dgram: generalize recvmsg and drop
 transport->dgram_dequeue
Message-Id: <20230413-b4-vsock-dgram-v4-1-0cebbb2ae899@bytedance.com>
References: <20230413-b4-vsock-dgram-v4-0-0cebbb2ae899@bytedance.com>
In-Reply-To: <20230413-b4-vsock-dgram-v4-0-0cebbb2ae899@bytedance.com>
To: Stefan Hajnoczi, Stefano Garzarella, "Michael S. Tsirkin", Jason Wang,
 Xuan Zhuo, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
 "K. Y. Srinivasan", Haiyang Zhang, Wei Liu, Dexuan Cui, Bryan Tan,
 Vishnu Dasa, VMware PV-Drivers Reviewers
Cc: Dan Carpenter, Simon Horman, Krasnov Arseniy, kvm@vger.kernel.org,
 virtualization@lists.linux-foundation.org, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-hyperv@vger.kernel.org,
 bpf@vger.kernel.org, Bobby Eshleman

This commit drops the transport->dgram_dequeue callback and makes
vsock_dgram_recvmsg() generic. It also adds additional transport
callbacks for use by the generic vsock_dgram_recvmsg(), such as
callbacks for parsing skbs for CID/port, which vary in format per
transport.
Signed-off-by: Bobby Eshleman
---
 drivers/vhost/vsock.c                   |  4 +-
 include/linux/virtio_vsock.h            |  3 ++
 include/net/af_vsock.h                  | 13 ++++++-
 net/vmw_vsock/af_vsock.c                | 51 ++++++++++++++++++++++++-
 net/vmw_vsock/hyperv_transport.c        | 17 +++++++--
 net/vmw_vsock/virtio_transport.c        |  4 +-
 net/vmw_vsock/virtio_transport_common.c | 18 +++++++++
 net/vmw_vsock/vmci_transport.c          | 68 +++++++++++++--------------------
 net/vmw_vsock/vsock_loopback.c          |  4 +-
 9 files changed, 132 insertions(+), 50 deletions(-)

diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index 6578db78f0ae..c8201c070b4b 100644
--- a/drivers/vhost/vsock.c
+++ b/drivers/vhost/vsock.c
@@ -410,9 +410,11 @@ static struct virtio_transport vhost_transport = {
 		.cancel_pkt = vhost_transport_cancel_pkt,
 
 		.dgram_enqueue = virtio_transport_dgram_enqueue,
-		.dgram_dequeue = virtio_transport_dgram_dequeue,
 		.dgram_bind = virtio_transport_dgram_bind,
 		.dgram_allow = virtio_transport_dgram_allow,
+		.dgram_get_cid = virtio_transport_dgram_get_cid,
+		.dgram_get_port = virtio_transport_dgram_get_port,
+		.dgram_get_length = virtio_transport_dgram_get_length,
 
 		.stream_enqueue = virtio_transport_stream_enqueue,
 		.stream_dequeue = virtio_transport_stream_dequeue,
diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
index c58453699ee9..23521a318cf0 100644
--- a/include/linux/virtio_vsock.h
+++ b/include/linux/virtio_vsock.h
@@ -219,6 +219,9 @@ bool virtio_transport_stream_allow(u32 cid, u32 port);
 int virtio_transport_dgram_bind(struct vsock_sock *vsk,
 				struct sockaddr_vm *addr);
 bool virtio_transport_dgram_allow(u32 cid, u32 port);
+int virtio_transport_dgram_get_cid(struct sk_buff *skb, unsigned int *cid);
+int virtio_transport_dgram_get_port(struct sk_buff *skb, unsigned int *port);
+int virtio_transport_dgram_get_length(struct sk_buff *skb, size_t *len);
 
 int virtio_transport_connect(struct vsock_sock *vsk);
 
diff --git a/include/net/af_vsock.h b/include/net/af_vsock.h
index 0e7504a42925..7bedb9ee7e3e 100644
--- a/include/net/af_vsock.h
+++ b/include/net/af_vsock.h
@@ -120,11 +120,20 @@ struct vsock_transport {
 
 	/* DGRAM. */
 	int (*dgram_bind)(struct vsock_sock *, struct sockaddr_vm *);
-	int (*dgram_dequeue)(struct vsock_sock *vsk, struct msghdr *msg,
-			     size_t len, int flags);
 	int (*dgram_enqueue)(struct vsock_sock *, struct sockaddr_vm *,
 			     struct msghdr *, size_t len);
 	bool (*dgram_allow)(u32 cid, u32 port);
+	int (*dgram_get_cid)(struct sk_buff *skb, unsigned int *cid);
+	int (*dgram_get_port)(struct sk_buff *skb, unsigned int *port);
+	int (*dgram_get_length)(struct sk_buff *skb, size_t *length);
+
+	/* The number of bytes into the buffer at which the payload starts, as
+	 * first seen by the receiving socket layer. For example, if the
+	 * transport presets the skb pointers using skb_pull(sizeof(header))
+	 * then this would be zero, otherwise it would be the size of the
+	 * header.
+	 */
+	const size_t dgram_payload_offset;
 
 	/* STREAM. */
 	/* TODO: stream_bind() */
diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
index efb8a0937a13..ffb4dd8b6ea7 100644
--- a/net/vmw_vsock/af_vsock.c
+++ b/net/vmw_vsock/af_vsock.c
@@ -1271,11 +1271,15 @@ static int vsock_dgram_connect(struct socket *sock,
 int vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg,
 			size_t len, int flags)
 {
+	const struct vsock_transport *transport;
 #ifdef CONFIG_BPF_SYSCALL
 	const struct proto *prot;
 #endif
 	struct vsock_sock *vsk;
+	struct sk_buff *skb;
+	size_t payload_len;
 	struct sock *sk;
+	int err;
 
 	sk = sock->sk;
 	vsk = vsock_sk(sk);
@@ -1286,7 +1290,52 @@ int vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg,
 		return prot->recvmsg(sk, msg, len, flags, NULL);
 #endif
 
-	return vsk->transport->dgram_dequeue(vsk, msg, len, flags);
+	if (flags & MSG_OOB || flags & MSG_ERRQUEUE)
+		return -EOPNOTSUPP;
+
+	transport = vsk->transport;
+
+	/* Retrieve the head sk_buff from the socket's receive queue. */
+	err = 0;
+	skb = skb_recv_datagram(sk_vsock(vsk), flags, &err);
+	if (!skb)
+		return err;
+
+	err = transport->dgram_get_length(skb, &payload_len);
+	if (err)
+		goto out;
+
+	if (payload_len > len) {
+		payload_len = len;
+		msg->msg_flags |= MSG_TRUNC;
+	}
+
+	/* Place the datagram payload in the user's iovec. */
+	err = skb_copy_datagram_msg(skb, transport->dgram_payload_offset, msg, payload_len);
+	if (err)
+		goto out;
+
+	if (msg->msg_name) {
+		/* Provide the address of the sender. */
+		DECLARE_SOCKADDR(struct sockaddr_vm *, vm_addr, msg->msg_name);
+		unsigned int cid, port;
+
+		err = transport->dgram_get_cid(skb, &cid);
+		if (err)
+			goto out;
+
+		err = transport->dgram_get_port(skb, &port);
+		if (err)
+			goto out;
+
+		vsock_addr_init(vm_addr, cid, port);
+		msg->msg_namelen = sizeof(*vm_addr);
+	}
+	err = payload_len;
+
+out:
+	skb_free_datagram(&vsk->sk, skb);
+	return err;
 }
 EXPORT_SYMBOL_GPL(vsock_dgram_recvmsg);
 
diff --git a/net/vmw_vsock/hyperv_transport.c b/net/vmw_vsock/hyperv_transport.c
index 7cb1a9d2cdb4..ff6e87e25fa0 100644
--- a/net/vmw_vsock/hyperv_transport.c
+++ b/net/vmw_vsock/hyperv_transport.c
@@ -556,8 +556,17 @@ static int hvs_dgram_bind(struct vsock_sock *vsk, struct sockaddr_vm *addr)
 	return -EOPNOTSUPP;
 }
 
-static int hvs_dgram_dequeue(struct vsock_sock *vsk, struct msghdr *msg,
-			     size_t len, int flags)
+static int hvs_dgram_get_cid(struct sk_buff *skb, unsigned int *cid)
+{
+	return -EOPNOTSUPP;
+}
+
+static int hvs_dgram_get_port(struct sk_buff *skb, unsigned int *port)
+{
+	return -EOPNOTSUPP;
+}
+
+static int hvs_dgram_get_length(struct sk_buff *skb, size_t *len)
 {
 	return -EOPNOTSUPP;
 }
@@ -833,7 +842,9 @@ static struct vsock_transport hvs_transport = {
 	.shutdown = hvs_shutdown,
 
 	.dgram_bind = hvs_dgram_bind,
-	.dgram_dequeue = hvs_dgram_dequeue,
+	.dgram_get_cid = hvs_dgram_get_cid,
+	.dgram_get_port = hvs_dgram_get_port,
+	.dgram_get_length = hvs_dgram_get_length,
 	.dgram_enqueue = hvs_dgram_enqueue,
 	.dgram_allow = hvs_dgram_allow,
 
diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index e95df847176b..5763cdf13804 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -429,9 +429,11 @@ static struct virtio_transport virtio_transport = {
 		.cancel_pkt = virtio_transport_cancel_pkt,
 
 		.dgram_bind = virtio_transport_dgram_bind,
-		.dgram_dequeue = virtio_transport_dgram_dequeue,
 		.dgram_enqueue = virtio_transport_dgram_enqueue,
 		.dgram_allow = virtio_transport_dgram_allow,
+		.dgram_get_cid = virtio_transport_dgram_get_cid,
+		.dgram_get_port = virtio_transport_dgram_get_port,
+		.dgram_get_length = virtio_transport_dgram_get_length,
 
 		.stream_dequeue = virtio_transport_stream_dequeue,
 		.stream_enqueue = virtio_transport_stream_enqueue,
diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
index b769fc258931..e6903c719964 100644
--- a/net/vmw_vsock/virtio_transport_common.c
+++ b/net/vmw_vsock/virtio_transport_common.c
@@ -797,6 +797,24 @@ int virtio_transport_dgram_bind(struct vsock_sock *vsk,
 }
 EXPORT_SYMBOL_GPL(virtio_transport_dgram_bind);
 
+int virtio_transport_dgram_get_cid(struct sk_buff *skb, unsigned int *cid)
+{
+	return -EOPNOTSUPP;
+}
+EXPORT_SYMBOL_GPL(virtio_transport_dgram_get_cid);
+
+int virtio_transport_dgram_get_port(struct sk_buff *skb, unsigned int *port)
+{
+	return -EOPNOTSUPP;
+}
+EXPORT_SYMBOL_GPL(virtio_transport_dgram_get_port);
+
+int virtio_transport_dgram_get_length(struct sk_buff *skb, size_t *len)
+{
+	return -EOPNOTSUPP;
+}
+EXPORT_SYMBOL_GPL(virtio_transport_dgram_get_length);
+
 bool virtio_transport_dgram_allow(u32 cid, u32 port)
 {
 	return false;
diff --git a/net/vmw_vsock/vmci_transport.c b/net/vmw_vsock/vmci_transport.c
index b370070194fa..bbc63826bf48 100644
--- a/net/vmw_vsock/vmci_transport.c
+++ b/net/vmw_vsock/vmci_transport.c
@@ -1731,57 +1731,40 @@ static int vmci_transport_dgram_enqueue(
 	return err - sizeof(*dg);
 }
 
-static int vmci_transport_dgram_dequeue(struct vsock_sock *vsk,
-					struct msghdr *msg, size_t len,
-					int flags)
+static int vmci_transport_dgram_get_cid(struct sk_buff *skb, unsigned int *cid)
 {
-	int err;
 	struct vmci_datagram *dg;
-	size_t payload_len;
-	struct sk_buff *skb;
 
-	if (flags & MSG_OOB || flags & MSG_ERRQUEUE)
-		return -EOPNOTSUPP;
+	dg = (struct vmci_datagram *)skb->data;
+	if (!dg)
+		return -EINVAL;
 
-	/* Retrieve the head sk_buff from the socket's receive queue. */
-	err = 0;
-	skb = skb_recv_datagram(&vsk->sk, flags, &err);
-	if (!skb)
-		return err;
+	*cid = dg->src.context;
+	return 0;
+}
+
+static int vmci_transport_dgram_get_port(struct sk_buff *skb, unsigned int *port)
+{
+	struct vmci_datagram *dg;
 
 	dg = (struct vmci_datagram *)skb->data;
 	if (!dg)
-		/* err is 0, meaning we read zero bytes. */
-		goto out;
-
-	payload_len = dg->payload_size;
-	/* Ensure the sk_buff matches the payload size claimed in the packet. */
-	if (payload_len != skb->len - sizeof(*dg)) {
-		err = -EINVAL;
-		goto out;
-	}
+		return -EINVAL;
 
-	if (payload_len > len) {
-		payload_len = len;
-		msg->msg_flags |= MSG_TRUNC;
-	}
+	*port = dg->src.resource;
+	return 0;
+}
 
-	/* Place the datagram payload in the user's iovec. */
-	err = skb_copy_datagram_msg(skb, sizeof(*dg), msg, payload_len);
-	if (err)
-		goto out;
+static int vmci_transport_dgram_get_length(struct sk_buff *skb, size_t *len)
+{
+	struct vmci_datagram *dg;
 
-	if (msg->msg_name) {
-		/* Provide the address of the sender. */
-		DECLARE_SOCKADDR(struct sockaddr_vm *, vm_addr, msg->msg_name);
-		vsock_addr_init(vm_addr, dg->src.context, dg->src.resource);
-		msg->msg_namelen = sizeof(*vm_addr);
-	}
-	err = payload_len;
+	dg = (struct vmci_datagram *)skb->data;
+	if (!dg)
+		return -EINVAL;
 
-out:
-	skb_free_datagram(&vsk->sk, skb);
-	return err;
+	*len = dg->payload_size;
+	return 0;
 }
 
 static bool vmci_transport_dgram_allow(u32 cid, u32 port)
@@ -2040,9 +2023,12 @@ static struct vsock_transport vmci_transport = {
 	.release = vmci_transport_release,
 	.connect = vmci_transport_connect,
 	.dgram_bind = vmci_transport_dgram_bind,
-	.dgram_dequeue = vmci_transport_dgram_dequeue,
 	.dgram_enqueue = vmci_transport_dgram_enqueue,
 	.dgram_allow = vmci_transport_dgram_allow,
+	.dgram_get_cid = vmci_transport_dgram_get_cid,
+	.dgram_get_port = vmci_transport_dgram_get_port,
+	.dgram_get_length = vmci_transport_dgram_get_length,
+	.dgram_payload_offset = sizeof(struct vmci_datagram),
 	.stream_dequeue = vmci_transport_stream_dequeue,
 	.stream_enqueue = vmci_transport_stream_enqueue,
 	.stream_has_data = vmci_transport_stream_has_data,
diff --git a/net/vmw_vsock/vsock_loopback.c b/net/vmw_vsock/vsock_loopback.c
index 5c6360df1f31..2f3cabc79ee5 100644
--- a/net/vmw_vsock/vsock_loopback.c
+++ b/net/vmw_vsock/vsock_loopback.c
@@ -62,9 +62,11 @@ static struct virtio_transport loopback_transport = {
 		.cancel_pkt = vsock_loopback_cancel_pkt,
 
 		.dgram_bind = virtio_transport_dgram_bind,
-		.dgram_dequeue = virtio_transport_dgram_dequeue,
 		.dgram_enqueue = virtio_transport_dgram_enqueue,
 		.dgram_allow = virtio_transport_dgram_allow,
+		.dgram_get_cid = virtio_transport_dgram_get_cid,
+		.dgram_get_port = virtio_transport_dgram_get_port,
+		.dgram_get_length = virtio_transport_dgram_get_length,
 
 		.stream_dequeue = virtio_transport_stream_dequeue,
 		.stream_enqueue = virtio_transport_stream_enqueue,

From patchwork Sat Jun 10 00:58:29 2023
X-Patchwork-Submitter: Bobby Eshleman
X-Patchwork-Id: 105863
From: Bobby Eshleman
Date: Sat, 10 Jun 2023 00:58:29 +0000
Subject: [PATCH RFC net-next v4 2/8] vsock: refactor transport lookup code
Message-Id: <20230413-b4-vsock-dgram-v4-2-0cebbb2ae899@bytedance.com>
References: <20230413-b4-vsock-dgram-v4-0-0cebbb2ae899@bytedance.com>
In-Reply-To: <20230413-b4-vsock-dgram-v4-0-0cebbb2ae899@bytedance.com>
To: Stefan Hajnoczi, Stefano Garzarella, "Michael S. Tsirkin", Jason Wang,
 Xuan Zhuo, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
 "K. Y. Srinivasan", Haiyang Zhang, Wei Liu, Dexuan Cui, Bryan Tan,
 Vishnu Dasa, VMware PV-Drivers Reviewers
Cc: Dan Carpenter, Simon Horman, Krasnov Arseniy, kvm@vger.kernel.org,
 virtualization@lists.linux-foundation.org, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-hyperv@vger.kernel.org,
 bpf@vger.kernel.org, Bobby Eshleman

Introduce a new reusable function, vsock_connectible_lookup_transport(),
that performs the transport lookup logic. No functional change intended.

Signed-off-by: Bobby Eshleman
Reviewed-by: Stefano Garzarella
---
 net/vmw_vsock/af_vsock.c | 25 ++++++++++++++++++-------
 1 file changed, 18 insertions(+), 7 deletions(-)

diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
index ffb4dd8b6ea7..74358f0b47fa 100644
--- a/net/vmw_vsock/af_vsock.c
+++ b/net/vmw_vsock/af_vsock.c
@@ -422,6 +422,22 @@ static void vsock_deassign_transport(struct vsock_sock *vsk)
 	vsk->transport = NULL;
 }
 
+static const struct vsock_transport *
+vsock_connectible_lookup_transport(unsigned int cid, __u8 flags)
+{
+	const struct vsock_transport *transport;
+
+	if (vsock_use_local_transport(cid))
+		transport = transport_local;
+	else if (cid <= VMADDR_CID_HOST || !transport_h2g ||
+		 (flags & VMADDR_FLAG_TO_HOST))
+		transport = transport_g2h;
+	else
+		transport = transport_h2g;
+
+	return transport;
+}
+
 /* Assign a transport to a socket and call the .init transport callback.
  *
  * Note: for connection oriented socket this must be called when vsk->remote_addr
@@ -462,13 +478,8 @@ int vsock_assign_transport(struct vsock_sock *vsk, struct vsock_sock *psk)
 		break;
 	case SOCK_STREAM:
 	case SOCK_SEQPACKET:
-		if (vsock_use_local_transport(remote_cid))
-			new_transport = transport_local;
-		else if (remote_cid <= VMADDR_CID_HOST || !transport_h2g ||
-			 (remote_flags & VMADDR_FLAG_TO_HOST))
-			new_transport = transport_g2h;
-		else
-			new_transport = transport_h2g;
+		new_transport = vsock_connectible_lookup_transport(remote_cid,
+								   remote_flags);
 		break;
 	default:
 		return -ESOCKTNOSUPPORT;

From patchwork Sat Jun 10 00:58:30 2023
X-Patchwork-Submitter: Bobby Eshleman
X-Patchwork-Id: 105864
b=p81ZQpO1u0itj8KdM0+lm+wPGNNlWry/BWs049UN+gv+9dF+W970vOmEzWzgF5S6AT iT0JEolGazm0NQX2BPtGqUISo50CXIjCLMy6TPzE61xOY9EZvrLwe+OJvYVYRrTWo2Ir XVdJSuSD43oJzUyyfZ9H2X9OFavXXoP7aTFTH/x3D82i8mtwB+gh99Uq23xHbZG5RBhR WfnW4eXuY3k6Csz0PBwYOcYAUgFHp/hNeI9td3y/4ZrkqVAsVsBoXIXhfXTitKWkIaeE XeXZIqi5pOZBqdmHWAVzZSKE0PLS4NPVJdqUyaDqlh1Hs95rAwglywxk6FaQXYs55qrF dirQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@bytedance.com header.s=google header.b=OP+oXfcQ; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=QUARANTINE sp=QUARANTINE dis=NONE) header.from=bytedance.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id y16-20020a170902b49000b001b02fa876c7si3469993plr.578.2023.06.09.18.14.13; Fri, 09 Jun 2023 18:14:28 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@bytedance.com header.s=google header.b=OP+oXfcQ; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=QUARANTINE sp=QUARANTINE dis=NONE) header.from=bytedance.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230283AbjFJBAG (ORCPT + 99 others); Fri, 9 Jun 2023 21:00:06 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:55544 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233586AbjFJA7f (ORCPT ); Fri, 9 Jun 2023 20:59:35 -0400 Received: from mail-qv1-xf2f.google.com (mail-qv1-xf2f.google.com [IPv6:2607:f8b0:4864:20::f2f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0D4BA3A9A for ; Fri, 9 Jun 2023 17:58:48 -0700 (PDT) 
Received: by mail-qv1-xf2f.google.com with SMTP id 6a1803df08f44-62b69b95a33so16444536d6.0 for ; Fri, 09 Jun 2023 17:58:48 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=bytedance.com; s=google; t=1686358727; x=1688950727; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:from:to:cc:subject:date:message-id :reply-to; bh=RzIAOXq4i4bbs1zeMJ542v3hXYyPRwWNz5JJa1wC9EA=; b=OP+oXfcQ6tZNLSTEug87YmxqQRMSkH26emVGngJGdzDTW6IXWyoUAeN4zWaQC4/tUj SsY46cZa1RbtZHz9ZXbiZ36gteI+cLxx9MVEvep4MlxOsqxd6RtNq0WSL3hB+vvVT1UG dMU0+Nn43ALJe1WCC4vHMAxF+p/9bFj/od6jHaB7m6Ua5ku05kmLDH5VN6fHauyot/fO RYhxoAsH+X3zsv0pVlMkk3DGRJS8w7sLOBH39NxzwIUl1aSBnHcZHlEhlHW/dHX+YPWp 59/6uctu3mgQt3g1+8mhN3ECuakEW598ilenjEbUe6VzJJzYP4ar3Uwo849Cbvvn5SRU oEaw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1686358727; x=1688950727; h=cc:to:in-reply-to:references:message-id:content-transfer-encoding :mime-version:subject:date:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=RzIAOXq4i4bbs1zeMJ542v3hXYyPRwWNz5JJa1wC9EA=; b=Ps2HU6y0dUPkgau90KJL8PJpkyG4lMaX7hvIyPT7LfH/IUrMdIDw3QV5HRbyj9lLTa 24QxlVEthUKgfqhNLcc8hBf/Oz3bO5S84vfHeesj5iBiB9CIxyRN4p2cj7zdKMdrMxFM R7VaF688zR5KHgCEN4OynStU2tIoWlwaT4WDjUUw2cqKbC47nBjaNguUfosPSW7LHFKH hlcdtuTtm4durb/PyYiVCsqE1tItJI1noLtXpLZiD//b6oqVK1ysn1ndklO9Uearm2ie wi5mmvMGZ6TG2Q0naKEg9V20gTilEle+7ZwI8e1hMyICuHiiPGJ9NAYsRl5v/WlhEK3K ZMeA== X-Gm-Message-State: AC+VfDyg8Pvqbg/u+ZUZUZOds+SmUuoyM/0VUrK5qlhJinWqXSEdTsEr YlgzprcNhEZ6/BEnwVnk0cv48g== X-Received: by 2002:a05:6214:5013:b0:5a6:24f6:724d with SMTP id jo19-20020a056214501300b005a624f6724dmr4134395qvb.13.1686358727069; Fri, 09 Jun 2023 17:58:47 -0700 (PDT) Received: from [172.17.0.4] ([130.44.212.126]) by smtp.gmail.com with ESMTPSA id x17-20020a0ce251000000b00606750abaf9sm1504075qvl.136.2023.06.09.17.58.46 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 09 Jun 2023 17:58:46 -0700 
From: Bobby Eshleman
Date: Sat, 10 Jun 2023 00:58:30 +0000
Subject: [PATCH RFC net-next v4 3/8] vsock: support multi-transport datagrams
Message-Id: <20230413-b4-vsock-dgram-v4-3-0cebbb2ae899@bytedance.com>
To: Stefan Hajnoczi, Stefano Garzarella, "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, "K. Y. Srinivasan", Haiyang Zhang, Wei Liu, Dexuan Cui, Bryan Tan, Vishnu Dasa, VMware PV-Drivers Reviewers
Cc: Dan Carpenter, Simon Horman, Krasnov Arseniy, kvm@vger.kernel.org, virtualization@lists.linux-foundation.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-hyperv@vger.kernel.org, bpf@vger.kernel.org, Bobby Eshleman

This patch adds support for multi-transport datagrams. This includes:

- Per-packet lookup of transports when using sendto(sockaddr_vm)
- Selecting the H2G or G2H transport using VMADDR_FLAG_TO_HOST and the
  CID in sockaddr_vm

To preserve backwards compatibility with VMCI, some important changes
were made. "transport_dgram" / VSOCK_TRANSPORT_F_DGRAM is now used for
datagrams only if no g2h or h2g transport that can transmit the packet
has been registered yet. If there is a g2h/h2g transport for that remote
address, that transport is used instead of "transport_dgram". This
essentially makes "transport_dgram" a fallback for when h2g/g2h has not
yet come online, which appears to be exactly the VMCI use case. This
design makes sense because there is no reason transport_{g2h,h2g} cannot
also service datagrams, which makes the role of transport_dgram hard to
understand outside of the VMCI context.

The logic around "transport_dgram" had to be retained to avoid breaking
VMCI:

1) VMCI datagrams appear to function outside of the h2g/g2h paradigm.
   When the VMCI transport comes online, it registers itself with the
   DGRAM feature, but not H2G/G2H. Only later, when the transport has
   more information about its environment, does it register H2G or G2H.
   If a datagram socket becomes active after DGRAM registration but
   before G2H/H2G registration, the "transport_dgram" transport must be
   used.

2) VMCI seems to require that a special message be sent by the transport
   when a datagram socket calls bind(). Under the h2g/g2h model, the
   transport is selected using remote_addr, which is set by connect().
   At bind time there is no remote_addr, because often no connect() has
   been called yet, so the transport is null. With a null transport
   there is no good way for a datagram socket to tell the VMCI transport
   that bind() has just been called on it.

Only transports with a special datagram fallback use case such as VMCI
need to register VSOCK_TRANSPORT_F_DGRAM.
Signed-off-by: Bobby Eshleman
---
 drivers/vhost/vsock.c                   |  1 -
 include/linux/virtio_vsock.h            |  2 -
 net/vmw_vsock/af_vsock.c                | 78 +++++++++++++++++++++++++--------
 net/vmw_vsock/hyperv_transport.c        |  6 ---
 net/vmw_vsock/virtio_transport.c        |  1 -
 net/vmw_vsock/virtio_transport_common.c |  7 ---
 net/vmw_vsock/vsock_loopback.c          |  1 -
 7 files changed, 60 insertions(+), 36 deletions(-)

diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index c8201c070b4b..8f0082da5e70 100644
--- a/drivers/vhost/vsock.c
+++ b/drivers/vhost/vsock.c
@@ -410,7 +410,6 @@ static struct virtio_transport vhost_transport = {
 	.cancel_pkt               = vhost_transport_cancel_pkt,
 
 	.dgram_enqueue            = virtio_transport_dgram_enqueue,
-	.dgram_bind               = virtio_transport_dgram_bind,
 	.dgram_allow              = virtio_transport_dgram_allow,
 	.dgram_get_cid            = virtio_transport_dgram_get_cid,
 	.dgram_get_port           = virtio_transport_dgram_get_port,
diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
index 23521a318cf0..73afa09f4585 100644
--- a/include/linux/virtio_vsock.h
+++ b/include/linux/virtio_vsock.h
@@ -216,8 +216,6 @@ void virtio_transport_notify_buffer_size(struct vsock_sock *vsk, u64 *val);
 u64 virtio_transport_stream_rcvhiwat(struct vsock_sock *vsk);
 bool virtio_transport_stream_is_active(struct vsock_sock *vsk);
 bool virtio_transport_stream_allow(u32 cid, u32 port);
-int virtio_transport_dgram_bind(struct vsock_sock *vsk,
-				struct sockaddr_vm *addr);
 bool virtio_transport_dgram_allow(u32 cid, u32 port);
 int virtio_transport_dgram_get_cid(struct sk_buff *skb, unsigned int *cid);
 int virtio_transport_dgram_get_port(struct sk_buff *skb, unsigned int *port);
diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
index 74358f0b47fa..ef86765f3765 100644
--- a/net/vmw_vsock/af_vsock.c
+++ b/net/vmw_vsock/af_vsock.c
@@ -438,6 +438,18 @@ vsock_connectible_lookup_transport(unsigned int cid, __u8 flags)
 	return transport;
 }
 
+static const struct vsock_transport *
+vsock_dgram_lookup_transport(unsigned int cid,
+			     __u8 flags)
+{
+	const struct vsock_transport *transport;
+
+	transport = vsock_connectible_lookup_transport(cid, flags);
+	if (transport)
+		return transport;
+
+	return transport_dgram;
+}
+
 /* Assign a transport to a socket and call the .init transport callback.
  *
  * Note: for connection oriented socket this must be called when vsk->remote_addr
@@ -474,7 +486,8 @@ int vsock_assign_transport(struct vsock_sock *vsk, struct vsock_sock *psk)
 
 	switch (sk->sk_type) {
 	case SOCK_DGRAM:
-		new_transport = transport_dgram;
+		new_transport = vsock_dgram_lookup_transport(remote_cid,
+							     remote_flags);
 		break;
 	case SOCK_STREAM:
 	case SOCK_SEQPACKET:
@@ -691,6 +704,9 @@ static int __vsock_bind_connectible(struct vsock_sock *vsk,
 static int __vsock_bind_dgram(struct vsock_sock *vsk,
 			      struct sockaddr_vm *addr)
 {
+	if (!vsk->transport || !vsk->transport->dgram_bind)
+		return -EINVAL;
+
 	return vsk->transport->dgram_bind(vsk, addr);
 }
 
@@ -1172,19 +1188,24 @@ static int vsock_dgram_sendmsg(struct socket *sock, struct msghdr *msg,
 
 	lock_sock(sk);
 
-	transport = vsk->transport;
-
-	err = vsock_auto_bind(vsk);
-	if (err)
-		goto out;
-
 	/* If the provided message contains an address, use that.  Otherwise
 	 * fall back on the socket's remote handle (if it has been connected).
 	 */
 	if (msg->msg_name &&
 	    vsock_addr_cast(msg->msg_name, msg->msg_namelen, &remote_addr) == 0) {
+		transport = vsock_dgram_lookup_transport(remote_addr->svm_cid,
+							 remote_addr->svm_flags);
+		if (!transport) {
+			err = -EINVAL;
+			goto out;
+		}
+
+		if (!try_module_get(transport->module)) {
+			err = -ENODEV;
+			goto out;
+		}
+
 		/* Ensure this address is of the right type and is a valid
 		 * destination.
 		 */
@@ -1193,11 +1214,27 @@ static int vsock_dgram_sendmsg(struct socket *sock, struct msghdr *msg,
 			remote_addr->svm_cid = transport->get_local_cid();
 
 		if (!vsock_addr_bound(remote_addr)) {
+			module_put(transport->module);
+			err = -EINVAL;
+			goto out;
+		}
+
+		if (!transport->dgram_allow(remote_addr->svm_cid,
+					    remote_addr->svm_port)) {
+			module_put(transport->module);
 			err = -EINVAL;
 			goto out;
 		}
+
+		err = transport->dgram_enqueue(vsk, remote_addr, msg, len);
+		module_put(transport->module);
 	} else if (sock->state == SS_CONNECTED) {
 		remote_addr = &vsk->remote_addr;
+		transport = vsk->transport;
+
+		err = vsock_auto_bind(vsk);
+		if (err)
+			goto out;
 
 		if (remote_addr->svm_cid == VMADDR_CID_ANY)
 			remote_addr->svm_cid = transport->get_local_cid();
@@ -1205,23 +1242,23 @@ static int vsock_dgram_sendmsg(struct socket *sock, struct msghdr *msg,
 		/* XXX Should connect() or this function ensure remote_addr is
 		 * bound?
 		 */
-		if (!vsock_addr_bound(&vsk->remote_addr)) {
+		if (!vsock_addr_bound(remote_addr)) {
 			err = -EINVAL;
 			goto out;
 		}
-	} else {
-		err = -EINVAL;
-		goto out;
-	}
 
-	if (!transport->dgram_allow(remote_addr->svm_cid,
-				    remote_addr->svm_port)) {
+		if (!transport->dgram_allow(remote_addr->svm_cid,
+					    remote_addr->svm_port)) {
+			err = -EINVAL;
+			goto out;
+		}
+
+		err = transport->dgram_enqueue(vsk, remote_addr, msg, len);
+	} else {
 		err = -EINVAL;
 		goto out;
 	}
 
-	err = transport->dgram_enqueue(vsk, remote_addr, msg, len);
-
 out:
 	release_sock(sk);
 	return err;
@@ -1255,13 +1292,18 @@ static int vsock_dgram_connect(struct socket *sock,
 	if (err)
 		goto out;
 
+	memcpy(&vsk->remote_addr, remote_addr, sizeof(vsk->remote_addr));
+
+	err = vsock_assign_transport(vsk, NULL);
+	if (err)
+		goto out;
+
 	if (!vsk->transport->dgram_allow(remote_addr->svm_cid,
 					 remote_addr->svm_port)) {
 		err = -EINVAL;
 		goto out;
 	}
 
-	memcpy(&vsk->remote_addr, remote_addr, sizeof(vsk->remote_addr));
 	sock->state = SS_CONNECTED;
 
 	/* sock map disallows redirection of non-TCP sockets with sk_state !=
diff --git a/net/vmw_vsock/hyperv_transport.c b/net/vmw_vsock/hyperv_transport.c
index ff6e87e25fa0..c00bc5da769a 100644
--- a/net/vmw_vsock/hyperv_transport.c
+++ b/net/vmw_vsock/hyperv_transport.c
@@ -551,11 +551,6 @@ static void hvs_destruct(struct vsock_sock *vsk)
 	kfree(hvs);
 }
 
-static int hvs_dgram_bind(struct vsock_sock *vsk, struct sockaddr_vm *addr)
-{
-	return -EOPNOTSUPP;
-}
-
 static int hvs_dgram_get_cid(struct sk_buff *skb, unsigned int *cid)
 {
 	return -EOPNOTSUPP;
@@ -841,7 +836,6 @@ static struct vsock_transport hvs_transport = {
 	.connect                  = hvs_connect,
 	.shutdown                 = hvs_shutdown,
 
-	.dgram_bind               = hvs_dgram_bind,
 	.dgram_get_cid            = hvs_dgram_get_cid,
 	.dgram_get_port           = hvs_dgram_get_port,
 	.dgram_get_length         = hvs_dgram_get_length,
diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index 5763cdf13804..1b7843a7779a 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -428,7 +428,6 @@ static struct virtio_transport virtio_transport = {
 		.shutdown                 = virtio_transport_shutdown,
 		.cancel_pkt               = virtio_transport_cancel_pkt,
 
-		.dgram_bind               = virtio_transport_dgram_bind,
 		.dgram_enqueue            = virtio_transport_dgram_enqueue,
 		.dgram_allow              = virtio_transport_dgram_allow,
 		.dgram_get_cid            = virtio_transport_dgram_get_cid,
diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
index e6903c719964..d5a3c8efe84b 100644
--- a/net/vmw_vsock/virtio_transport_common.c
+++ b/net/vmw_vsock/virtio_transport_common.c
@@ -790,13 +790,6 @@ bool virtio_transport_stream_allow(u32 cid, u32 port)
 }
 EXPORT_SYMBOL_GPL(virtio_transport_stream_allow);
 
-int virtio_transport_dgram_bind(struct vsock_sock *vsk,
-				struct sockaddr_vm *addr)
-{
-	return -EOPNOTSUPP;
-}
-EXPORT_SYMBOL_GPL(virtio_transport_dgram_bind);
-
 int virtio_transport_dgram_get_cid(struct sk_buff *skb, unsigned int *cid)
 {
 	return -EOPNOTSUPP;
diff --git a/net/vmw_vsock/vsock_loopback.c b/net/vmw_vsock/vsock_loopback.c
index 2f3cabc79ee5..e9de45a26fbd 100644
--- a/net/vmw_vsock/vsock_loopback.c
+++ b/net/vmw_vsock/vsock_loopback.c
@@ -61,7 +61,6 @@ static struct virtio_transport loopback_transport = {
 		.shutdown                 = virtio_transport_shutdown,
 		.cancel_pkt               = vsock_loopback_cancel_pkt,
 
-		.dgram_bind               = virtio_transport_dgram_bind,
 		.dgram_enqueue            = virtio_transport_dgram_enqueue,
 		.dgram_allow              = virtio_transport_dgram_allow,
 		.dgram_get_cid            = virtio_transport_dgram_get_cid,

From patchwork Sat Jun 10 00:58:31 2023
From: Bobby Eshleman
Date: Sat, 10 Jun 2023 00:58:31 +0000
Subject: [PATCH RFC net-next v4 4/8] vsock: make vsock bind reusable
Message-Id: <20230413-b4-vsock-dgram-v4-4-0cebbb2ae899@bytedance.com>

This commit makes the bind table management functions in vsock usable
for different bind tables, so that a future patch can reuse them for
datagrams.
Signed-off-by: Bobby Eshleman
---
 net/vmw_vsock/af_vsock.c | 33 ++++++++++++++++++++++++++-------
 1 file changed, 26 insertions(+), 7 deletions(-)

diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
index ef86765f3765..7a3ca4270446 100644
--- a/net/vmw_vsock/af_vsock.c
+++ b/net/vmw_vsock/af_vsock.c
@@ -230,11 +230,12 @@ static void __vsock_remove_connected(struct vsock_sock *vsk)
 	sock_put(&vsk->sk);
 }
 
-static struct sock *__vsock_find_bound_socket(struct sockaddr_vm *addr)
+struct sock *vsock_find_bound_socket_common(struct sockaddr_vm *addr,
+					    struct list_head *bind_table)
 {
 	struct vsock_sock *vsk;
 
-	list_for_each_entry(vsk, vsock_bound_sockets(addr), bound_table) {
+	list_for_each_entry(vsk, bind_table, bound_table) {
 		if (vsock_addr_equals_addr(addr, &vsk->local_addr))
 			return sk_vsock(vsk);
 
@@ -247,6 +248,11 @@ static struct sock *__vsock_find_bound_socket(struct sockaddr_vm *addr)
 	return NULL;
 }
 
+static struct sock *__vsock_find_bound_socket(struct sockaddr_vm *addr)
+{
+	return vsock_find_bound_socket_common(addr, vsock_bound_sockets(addr));
+}
+
 static struct sock *__vsock_find_connected_socket(struct sockaddr_vm *src,
 						  struct sockaddr_vm *dst)
 {
@@ -646,12 +652,17 @@ static void vsock_pending_work(struct work_struct *work)
 
 /**** SOCKET OPERATIONS ****/
 
-static int __vsock_bind_connectible(struct vsock_sock *vsk,
-				    struct sockaddr_vm *addr)
+static int vsock_bind_common(struct vsock_sock *vsk,
+			     struct sockaddr_vm *addr,
+			     struct list_head *bind_table,
+			     size_t table_size)
 {
 	static u32 port;
 	struct sockaddr_vm new_addr;
 
+	if (table_size < VSOCK_HASH_SIZE)
+		return -1;
+
 	if (!port)
 		port = get_random_u32_above(LAST_RESERVED_PORT);
 
@@ -667,7 +678,8 @@ static int __vsock_bind_connectible(struct vsock_sock *vsk,
 
 			new_addr.svm_port = port++;
 
-			if (!__vsock_find_bound_socket(&new_addr)) {
+			if (!vsock_find_bound_socket_common(&new_addr,
+							    &bind_table[VSOCK_HASH(addr)])) {
 				found = true;
 				break;
 			}
@@ -684,7 +696,8 @@ static int __vsock_bind_connectible(struct vsock_sock *vsk,
 			return -EACCES;
 		}
 
-		if (__vsock_find_bound_socket(&new_addr))
+		if (vsock_find_bound_socket_common(&new_addr,
+						   &bind_table[VSOCK_HASH(addr)]))
 			return -EADDRINUSE;
 	}
 
@@ -696,11 +709,17 @@ static int __vsock_bind_connectible(struct vsock_sock *vsk,
 	 * by AF_UNIX.
 	 */
 	__vsock_remove_bound(vsk);
-	__vsock_insert_bound(vsock_bound_sockets(&vsk->local_addr), vsk);
+	__vsock_insert_bound(&bind_table[VSOCK_HASH(&vsk->local_addr)], vsk);
 
 	return 0;
 }
 
+static int __vsock_bind_connectible(struct vsock_sock *vsk,
+				    struct sockaddr_vm *addr)
+{
+	return vsock_bind_common(vsk, addr, vsock_bind_table, VSOCK_HASH_SIZE + 1);
+}
+
 static int __vsock_bind_dgram(struct vsock_sock *vsk,
 			      struct sockaddr_vm *addr)
 {

From patchwork Sat Jun 10 00:58:32 2023
From: Bobby Eshleman
Date: Sat, 10 Jun 2023 00:58:32 +0000
Subject: [PATCH RFC net-next v4 5/8] virtio/vsock: add VIRTIO_VSOCK_F_DGRAM feature bit
Message-Id: <20230413-b4-vsock-dgram-v4-5-0cebbb2ae899@bytedance.com>

This commit adds a feature bit for virtio vsock to support datagrams.
Signed-off-by: Jiang Wang
Signed-off-by: Bobby Eshleman
---
 include/uapi/linux/virtio_vsock.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/uapi/linux/virtio_vsock.h b/include/uapi/linux/virtio_vsock.h
index 64738838bee5..9c25f267bbc0 100644
--- a/include/uapi/linux/virtio_vsock.h
+++ b/include/uapi/linux/virtio_vsock.h
@@ -40,6 +40,7 @@
 
 /* The feature bitmap for virtio vsock */
 #define VIRTIO_VSOCK_F_SEQPACKET	1	/* SOCK_SEQPACKET supported */
+#define VIRTIO_VSOCK_F_DGRAM		3	/* SOCK_DGRAM supported */
 
 struct virtio_vsock_config {
 	__le64 guest_cid;

From patchwork Sat Jun 10 00:58:33 2023
+0000
Subject: [PATCH RFC net-next v4 6/8] virtio/vsock: support dgrams
Message-Id: <20230413-b4-vsock-dgram-v4-6-0cebbb2ae899@bytedance.com>
References: <20230413-b4-vsock-dgram-v4-0-0cebbb2ae899@bytedance.com>
In-Reply-To: <20230413-b4-vsock-dgram-v4-0-0cebbb2ae899@bytedance.com>
To: Stefan Hajnoczi, Stefano Garzarella, "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, "K. Y. Srinivasan", Haiyang Zhang, Wei Liu, Dexuan Cui, Bryan Tan, Vishnu Dasa, VMware PV-Drivers Reviewers
Cc: Dan Carpenter, Simon Horman, Krasnov Arseniy, kvm@vger.kernel.org, virtualization@lists.linux-foundation.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-hyperv@vger.kernel.org, bpf@vger.kernel.org, Bobby Eshleman
X-Mailer: b4 0.12.2

This commit adds support for datagrams over virtio/vsock.

Message boundaries are preserved on a per-skb and per-vq entry basis.
Messages are copied in whole from the user to an skb, which in turn is
added in whole to the virtqueue's scatterlist for the device. Messages
do not straddle skbs and they do not straddle packets. Messages may be
truncated by the receiving user if their buffer is shorter than the
message.

Other properties of vsock datagrams:
- Datagrams self-throttle at the per-socket sk_sndbuf threshold.
- The same virtqueue is used for datagrams as for stream and seqpacket
  flows.
- Credits are not used for datagrams.
- Packets are dropped silently by the device, which means the virtqueue
  will still get kicked even during high packet loss, so long as the
  socket does not exceed sk_sndbuf.

Future work might include finding a way to reduce the virtqueue kick
rate for datagram flows with high packet loss.

Signed-off-by: Bobby Eshleman
---
 drivers/vhost/vsock.c                   |  27 ++++-
 include/linux/virtio_vsock.h            |   5 +-
 include/net/af_vsock.h                  |   1 +
 include/uapi/linux/virtio_vsock.h       |   1 +
 net/vmw_vsock/af_vsock.c                |  58 +++++++--
 net/vmw_vsock/virtio_transport.c        |  23 +++-
 net/vmw_vsock/virtio_transport_common.c | 207 ++++++++++++++++++++++++--------
 net/vmw_vsock/vsock_loopback.c          |   8 +-
 8 files changed, 264 insertions(+), 66 deletions(-)

diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c index 8f0082da5e70..159c1a22c1a8 100644 --- a/drivers/vhost/vsock.c +++ b/drivers/vhost/vsock.c @@ -32,7 +32,8 @@ enum { VHOST_VSOCK_FEATURES = VHOST_FEATURES | (1ULL << VIRTIO_F_ACCESS_PLATFORM) | - (1ULL << VIRTIO_VSOCK_F_SEQPACKET) + (1ULL << VIRTIO_VSOCK_F_SEQPACKET) | + (1ULL << VIRTIO_VSOCK_F_DGRAM) }; enum { @@ -56,6 +57,7 @@ struct vhost_vsock { atomic_t queued_replies; u32 guest_cid; + bool dgram_allow; bool seqpacket_allow; }; @@ -394,6 +396,7 @@ static bool vhost_vsock_more_replies(struct vhost_vsock *vsock) return val < vq->num; } +static bool vhost_transport_dgram_allow(u32 cid, u32 port); static bool vhost_transport_seqpacket_allow(u32 remote_cid); static struct virtio_transport vhost_transport = { @@ -410,10 +413,11 @@ static struct virtio_transport vhost_transport = { .cancel_pkt = vhost_transport_cancel_pkt, .dgram_enqueue = virtio_transport_dgram_enqueue, - .dgram_allow = virtio_transport_dgram_allow, + .dgram_allow = vhost_transport_dgram_allow, .dgram_get_cid = virtio_transport_dgram_get_cid, .dgram_get_port = virtio_transport_dgram_get_port, .dgram_get_length =
virtio_transport_dgram_get_length, + .dgram_payload_offset = 0, .stream_enqueue = virtio_transport_stream_enqueue, .stream_dequeue = virtio_transport_stream_dequeue, @@ -446,6 +450,22 @@ static struct virtio_transport vhost_transport = { .send_pkt = vhost_transport_send_pkt, }; +static bool vhost_transport_dgram_allow(u32 cid, u32 port) +{ + struct vhost_vsock *vsock; + bool dgram_allow = false; + + rcu_read_lock(); + vsock = vhost_vsock_get(cid); + + if (vsock) + dgram_allow = vsock->dgram_allow; + + rcu_read_unlock(); + + return dgram_allow; +} + static bool vhost_transport_seqpacket_allow(u32 remote_cid) { struct vhost_vsock *vsock; @@ -802,6 +822,9 @@ static int vhost_vsock_set_features(struct vhost_vsock *vsock, u64 features) if (features & (1ULL << VIRTIO_VSOCK_F_SEQPACKET)) vsock->seqpacket_allow = true; + if (features & (1ULL << VIRTIO_VSOCK_F_DGRAM)) + vsock->dgram_allow = true; + for (i = 0; i < ARRAY_SIZE(vsock->vqs); i++) { vq = &vsock->vqs[i]; mutex_lock(&vq->mutex); diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h index 73afa09f4585..237ca87a2ecd 100644 --- a/include/linux/virtio_vsock.h +++ b/include/linux/virtio_vsock.h @@ -216,7 +216,6 @@ void virtio_transport_notify_buffer_size(struct vsock_sock *vsk, u64 *val); u64 virtio_transport_stream_rcvhiwat(struct vsock_sock *vsk); bool virtio_transport_stream_is_active(struct vsock_sock *vsk); bool virtio_transport_stream_allow(u32 cid, u32 port); -bool virtio_transport_dgram_allow(u32 cid, u32 port); int virtio_transport_dgram_get_cid(struct sk_buff *skb, unsigned int *cid); int virtio_transport_dgram_get_port(struct sk_buff *skb, unsigned int *port); int virtio_transport_dgram_get_length(struct sk_buff *skb, size_t *len); @@ -247,4 +246,8 @@ void virtio_transport_put_credit(struct virtio_vsock_sock *vvs, u32 credit); void virtio_transport_deliver_tap_pkt(struct sk_buff *skb); int virtio_transport_purge_skbs(void *vsk, struct sk_buff_head *list); int 
virtio_transport_read_skb(struct vsock_sock *vsk, skb_read_actor_t read_actor); +void virtio_transport_init_dgram_bind_tables(void); +int virtio_transport_dgram_get_cid(struct sk_buff *skb, unsigned int *cid); +int virtio_transport_dgram_get_port(struct sk_buff *skb, unsigned int *port); +int virtio_transport_dgram_get_length(struct sk_buff *skb, size_t *len); #endif /* _LINUX_VIRTIO_VSOCK_H */ diff --git a/include/net/af_vsock.h b/include/net/af_vsock.h index 7bedb9ee7e3e..c115e655b4f5 100644 --- a/include/net/af_vsock.h +++ b/include/net/af_vsock.h @@ -225,6 +225,7 @@ void vsock_for_each_connected_socket(struct vsock_transport *transport, void (*fn)(struct sock *sk)); int vsock_assign_transport(struct vsock_sock *vsk, struct vsock_sock *psk); bool vsock_find_cid(unsigned int cid); +struct sock *vsock_find_bound_dgram_socket(struct sockaddr_vm *addr); /**** TAP ****/ diff --git a/include/uapi/linux/virtio_vsock.h b/include/uapi/linux/virtio_vsock.h index 9c25f267bbc0..27b4b2b8bf13 100644 --- a/include/uapi/linux/virtio_vsock.h +++ b/include/uapi/linux/virtio_vsock.h @@ -70,6 +70,7 @@ struct virtio_vsock_hdr { enum virtio_vsock_type { VIRTIO_VSOCK_TYPE_STREAM = 1, VIRTIO_VSOCK_TYPE_SEQPACKET = 2, + VIRTIO_VSOCK_TYPE_DGRAM = 3, }; enum virtio_vsock_op { diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c index 7a3ca4270446..b0b18e7f4299 100644 --- a/net/vmw_vsock/af_vsock.c +++ b/net/vmw_vsock/af_vsock.c @@ -114,6 +114,7 @@ static int __vsock_bind(struct sock *sk, struct sockaddr_vm *addr); static void vsock_sk_destruct(struct sock *sk); static int vsock_queue_rcv_skb(struct sock *sk, struct sk_buff *skb); +static bool sock_type_connectible(u16 type); /* Protocol family. 
*/ struct proto vsock_proto = { @@ -180,6 +181,8 @@ struct list_head vsock_connected_table[VSOCK_HASH_SIZE]; EXPORT_SYMBOL_GPL(vsock_connected_table); DEFINE_SPINLOCK(vsock_table_lock); EXPORT_SYMBOL_GPL(vsock_table_lock); +static struct list_head vsock_dgram_bind_table[VSOCK_HASH_SIZE]; +static DEFINE_SPINLOCK(vsock_dgram_table_lock); /* Autobind this socket to the local address if necessary. */ static int vsock_auto_bind(struct vsock_sock *vsk) @@ -202,6 +205,9 @@ static void vsock_init_tables(void) for (i = 0; i < ARRAY_SIZE(vsock_connected_table); i++) INIT_LIST_HEAD(&vsock_connected_table[i]); + + for (i = 0; i < ARRAY_SIZE(vsock_dgram_bind_table); i++) + INIT_LIST_HEAD(&vsock_dgram_bind_table[i]); } static void __vsock_insert_bound(struct list_head *list, @@ -230,8 +236,8 @@ static void __vsock_remove_connected(struct vsock_sock *vsk) sock_put(&vsk->sk); } -struct sock *vsock_find_bound_socket_common(struct sockaddr_vm *addr, - struct list_head *bind_table) +static struct sock *vsock_find_bound_socket_common(struct sockaddr_vm *addr, + struct list_head *bind_table) { struct vsock_sock *vsk; @@ -248,6 +254,23 @@ struct sock *vsock_find_bound_socket_common(struct sockaddr_vm *addr, return NULL; } +struct sock * +vsock_find_bound_dgram_socket(struct sockaddr_vm *addr) +{ + struct sock *sk; + + spin_lock_bh(&vsock_dgram_table_lock); + sk = vsock_find_bound_socket_common(addr, + &vsock_dgram_bind_table[VSOCK_HASH(addr)]); + if (sk) + sock_hold(sk); + + spin_unlock_bh(&vsock_dgram_table_lock); + + return sk; +} +EXPORT_SYMBOL_GPL(vsock_find_bound_dgram_socket); + static struct sock *__vsock_find_bound_socket(struct sockaddr_vm *addr) { return vsock_find_bound_socket_common(addr, vsock_bound_sockets(addr)); @@ -287,6 +310,14 @@ void vsock_insert_connected(struct vsock_sock *vsk) } EXPORT_SYMBOL_GPL(vsock_insert_connected); +static void vsock_remove_dgram_bound(struct vsock_sock *vsk) +{ + spin_lock_bh(&vsock_dgram_table_lock); + if (__vsock_in_bound_table(vsk)) + 
__vsock_remove_bound(vsk); + spin_unlock_bh(&vsock_dgram_table_lock); +} + void vsock_remove_bound(struct vsock_sock *vsk) { spin_lock_bh(&vsock_table_lock); @@ -338,7 +369,10 @@ EXPORT_SYMBOL_GPL(vsock_find_connected_socket); void vsock_remove_sock(struct vsock_sock *vsk) { - vsock_remove_bound(vsk); + if (sock_type_connectible(sk_vsock(vsk)->sk_type)) + vsock_remove_bound(vsk); + else + vsock_remove_dgram_bound(vsk); vsock_remove_connected(vsk); } EXPORT_SYMBOL_GPL(vsock_remove_sock); @@ -720,11 +754,19 @@ static int __vsock_bind_connectible(struct vsock_sock *vsk, return vsock_bind_common(vsk, addr, vsock_bind_table, VSOCK_HASH_SIZE + 1); } -static int __vsock_bind_dgram(struct vsock_sock *vsk, - struct sockaddr_vm *addr) +static int vsock_bind_dgram(struct vsock_sock *vsk, + struct sockaddr_vm *addr) { - if (!vsk->transport || !vsk->transport->dgram_bind) - return -EINVAL; + if (!vsk->transport || !vsk->transport->dgram_bind) { + int retval; + + spin_lock_bh(&vsock_dgram_table_lock); + retval = vsock_bind_common(vsk, addr, vsock_dgram_bind_table, + VSOCK_HASH_SIZE); + spin_unlock_bh(&vsock_dgram_table_lock); + + return retval; + } return vsk->transport->dgram_bind(vsk, addr); } @@ -755,7 +797,7 @@ static int __vsock_bind(struct sock *sk, struct sockaddr_vm *addr) break; case SOCK_DGRAM: - retval = __vsock_bind_dgram(vsk, addr); + retval = vsock_bind_dgram(vsk, addr); break; default: diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c index 1b7843a7779a..7160a3104218 100644 --- a/net/vmw_vsock/virtio_transport.c +++ b/net/vmw_vsock/virtio_transport.c @@ -63,6 +63,7 @@ struct virtio_vsock { u32 guest_cid; bool seqpacket_allow; + bool dgram_allow; }; static u32 virtio_transport_get_local_cid(void) @@ -413,6 +414,7 @@ static void virtio_vsock_rx_done(struct virtqueue *vq) queue_work(virtio_vsock_workqueue, &vsock->rx_work); } +static bool virtio_transport_dgram_allow(u32 cid, u32 port); static bool virtio_transport_seqpacket_allow(u32 
remote_cid); static struct virtio_transport virtio_transport = { @@ -465,6 +467,21 @@ static struct virtio_transport virtio_transport = { .send_pkt = virtio_transport_send_pkt, }; +static bool virtio_transport_dgram_allow(u32 cid, u32 port) +{ + struct virtio_vsock *vsock; + bool dgram_allow; + + dgram_allow = false; + rcu_read_lock(); + vsock = rcu_dereference(the_virtio_vsock); + if (vsock) + dgram_allow = vsock->dgram_allow; + rcu_read_unlock(); + + return dgram_allow; +} + static bool virtio_transport_seqpacket_allow(u32 remote_cid) { struct virtio_vsock *vsock; @@ -658,6 +675,9 @@ static int virtio_vsock_probe(struct virtio_device *vdev) if (virtio_has_feature(vdev, VIRTIO_VSOCK_F_SEQPACKET)) vsock->seqpacket_allow = true; + if (virtio_has_feature(vdev, VIRTIO_VSOCK_F_DGRAM)) + vsock->dgram_allow = true; + vdev->priv = vsock; ret = virtio_vsock_vqs_init(vsock); @@ -750,7 +770,8 @@ static struct virtio_device_id id_table[] = { }; static unsigned int features[] = { - VIRTIO_VSOCK_F_SEQPACKET + VIRTIO_VSOCK_F_SEQPACKET, + VIRTIO_VSOCK_F_DGRAM }; static struct virtio_driver virtio_vsock_driver = { diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c index d5a3c8efe84b..bc9d459723f5 100644 --- a/net/vmw_vsock/virtio_transport_common.c +++ b/net/vmw_vsock/virtio_transport_common.c @@ -37,6 +37,35 @@ virtio_transport_get_ops(struct vsock_sock *vsk) return container_of(t, struct virtio_transport, transport); } +/* Requires info->msg and info->vsk */ +static struct sk_buff * +virtio_transport_sock_alloc_send_skb(struct virtio_vsock_pkt_info *info, unsigned int size, + gfp_t mask, int *err) +{ + struct sk_buff *skb; + struct sock *sk; + int noblock; + + if (size < VIRTIO_VSOCK_SKB_HEADROOM) { + *err = -EINVAL; + return NULL; + } + + if (info->msg) + noblock = info->msg->msg_flags & MSG_DONTWAIT; + else + noblock = 1; + + sk = sk_vsock(info->vsk); + sk->sk_allocation = mask; + skb = sock_alloc_send_skb(sk, size, noblock, err); + 
if (!skb) + return NULL; + + skb_reserve(skb, VIRTIO_VSOCK_SKB_HEADROOM); + return skb; +} + /* Returns a new packet on success, otherwise returns NULL. * * If NULL is returned, errp is set to a negative errno. @@ -47,7 +76,8 @@ virtio_transport_alloc_skb(struct virtio_vsock_pkt_info *info, u32 src_cid, u32 src_port, u32 dst_cid, - u32 dst_port) + u32 dst_port, + int *errp) { const size_t skb_len = VIRTIO_VSOCK_SKB_HEADROOM + len; struct virtio_vsock_hdr *hdr; @@ -55,9 +85,21 @@ virtio_transport_alloc_skb(struct virtio_vsock_pkt_info *info, void *payload; int err; - skb = virtio_vsock_alloc_skb(skb_len, GFP_KERNEL); - if (!skb) + /* dgrams do not use credits, self-throttle according to sk_sndbuf + * using sock_alloc_send_skb. This helps avoid triggering the OOM. + */ + if (info->vsk && info->type == VIRTIO_VSOCK_TYPE_DGRAM) { + skb = virtio_transport_sock_alloc_send_skb(info, skb_len, GFP_KERNEL, &err); + } else { + skb = virtio_vsock_alloc_skb(skb_len, GFP_KERNEL); + if (!skb) + err = -ENOMEM; + } + + if (!skb) { + *errp = err; return NULL; + } hdr = virtio_vsock_hdr(skb); hdr->type = cpu_to_le16(info->type); @@ -96,12 +138,14 @@ virtio_transport_alloc_skb(struct virtio_vsock_pkt_info *info, if (info->vsk && !skb_set_owner_sk_safe(skb, sk_vsock(info->vsk))) { WARN_ONCE(1, "failed to allocate skb on vsock socket with sk_refcnt == 0\n"); + err = -EFAULT; goto out; } return skb; out: + *errp = err; kfree_skb(skb); return NULL; } @@ -183,7 +227,9 @@ EXPORT_SYMBOL_GPL(virtio_transport_deliver_tap_pkt); static u16 virtio_transport_get_type(struct sock *sk) { - if (sk->sk_type == SOCK_STREAM) + if (sk->sk_type == SOCK_DGRAM) + return VIRTIO_VSOCK_TYPE_DGRAM; + else if (sk->sk_type == SOCK_STREAM) return VIRTIO_VSOCK_TYPE_STREAM; else return VIRTIO_VSOCK_TYPE_SEQPACKET; @@ -239,11 +285,10 @@ static int virtio_transport_send_pkt_info(struct vsock_sock *vsk, skb = virtio_transport_alloc_skb(info, skb_len, src_cid, src_port, - dst_cid, dst_port); - if (!skb) { - ret = 
-ENOMEM; + dst_cid, dst_port, + &ret); + if (!skb) break; - } virtio_transport_inc_tx_pkt(vvs, skb); @@ -583,14 +628,30 @@ virtio_transport_seqpacket_enqueue(struct vsock_sock *vsk, } EXPORT_SYMBOL_GPL(virtio_transport_seqpacket_enqueue); -int -virtio_transport_dgram_dequeue(struct vsock_sock *vsk, - struct msghdr *msg, - size_t len, int flags) +int virtio_transport_dgram_get_cid(struct sk_buff *skb, unsigned int *cid) +{ + *cid = le64_to_cpu(virtio_vsock_hdr(skb)->src_cid); + return 0; +} +EXPORT_SYMBOL_GPL(virtio_transport_dgram_get_cid); + +int virtio_transport_dgram_get_port(struct sk_buff *skb, unsigned int *port) +{ + *port = le32_to_cpu(virtio_vsock_hdr(skb)->src_port); + return 0; +} +EXPORT_SYMBOL_GPL(virtio_transport_dgram_get_port); + +int virtio_transport_dgram_get_length(struct sk_buff *skb, size_t *len) { - return -EOPNOTSUPP; + /* The device layer must have already moved the data ptr beyond the + * header for skb->len to be correct. + */ + WARN_ON(skb->data == skb->head); + *len = skb->len; + return 0; } -EXPORT_SYMBOL_GPL(virtio_transport_dgram_dequeue); +EXPORT_SYMBOL_GPL(virtio_transport_dgram_get_length); s64 virtio_transport_stream_has_data(struct vsock_sock *vsk) { @@ -790,30 +851,6 @@ bool virtio_transport_stream_allow(u32 cid, u32 port) } EXPORT_SYMBOL_GPL(virtio_transport_stream_allow); -int virtio_transport_dgram_get_cid(struct sk_buff *skb, unsigned int *cid) -{ - return -EOPNOTSUPP; -} -EXPORT_SYMBOL_GPL(virtio_transport_dgram_get_cid); - -int virtio_transport_dgram_get_port(struct sk_buff *skb, unsigned int *port) -{ - return -EOPNOTSUPP; -} -EXPORT_SYMBOL_GPL(virtio_transport_dgram_get_port); - -int virtio_transport_dgram_get_length(struct sk_buff *skb, size_t *len) -{ - return -EOPNOTSUPP; -} -EXPORT_SYMBOL_GPL(virtio_transport_dgram_get_length); - -bool virtio_transport_dgram_allow(u32 cid, u32 port) -{ - return false; -} -EXPORT_SYMBOL_GPL(virtio_transport_dgram_allow); - int virtio_transport_connect(struct vsock_sock *vsk) { struct 
virtio_vsock_pkt_info info = { @@ -846,7 +883,34 @@ virtio_transport_dgram_enqueue(struct vsock_sock *vsk, struct msghdr *msg, size_t dgram_len) { - return -EOPNOTSUPP; + const struct virtio_transport *t_ops; + struct virtio_vsock_pkt_info info = { + .op = VIRTIO_VSOCK_OP_RW, + .msg = msg, + .vsk = vsk, + .type = VIRTIO_VSOCK_TYPE_DGRAM, + }; + u32 src_cid, src_port; + struct sk_buff *skb; + int err; + + if (dgram_len > VIRTIO_VSOCK_MAX_PKT_BUF_SIZE) + return -EMSGSIZE; + + t_ops = virtio_transport_get_ops(vsk); + src_cid = t_ops->transport.get_local_cid(); + src_port = vsk->local_addr.svm_port; + + skb = virtio_transport_alloc_skb(&info, dgram_len, + src_cid, src_port, + remote_addr->svm_cid, + remote_addr->svm_port, + &err); + + if (!skb) + return err; + + return t_ops->send_pkt(skb); } EXPORT_SYMBOL_GPL(virtio_transport_dgram_enqueue); @@ -903,6 +967,7 @@ static int virtio_transport_reset_no_sock(const struct virtio_transport *t, .reply = true, }; struct sk_buff *reply; + int err; /* Send RST only if the original pkt is not a RST pkt */ if (le16_to_cpu(hdr->op) == VIRTIO_VSOCK_OP_RST) @@ -915,9 +980,10 @@ static int virtio_transport_reset_no_sock(const struct virtio_transport *t, le64_to_cpu(hdr->dst_cid), le32_to_cpu(hdr->dst_port), le64_to_cpu(hdr->src_cid), - le32_to_cpu(hdr->src_port)); + le32_to_cpu(hdr->src_port), + &err); if (!reply) - return -ENOMEM; + return err; return t->send_pkt(reply); } @@ -1137,6 +1203,21 @@ virtio_transport_recv_enqueue(struct vsock_sock *vsk, kfree_skb(skb); } +/* This function takes ownership of the skb. + * + * It either places the skb on the sk_receive_queue or frees it. 
+ */ +static void +virtio_transport_recv_dgram(struct sock *sk, struct sk_buff *skb) +{ + if (sock_queue_rcv_skb(sk, skb)) { + kfree_skb(skb); + return; + } + + sk->sk_data_ready(sk); +} + static int virtio_transport_recv_connected(struct sock *sk, struct sk_buff *skb) @@ -1300,7 +1381,8 @@ virtio_transport_recv_listen(struct sock *sk, struct sk_buff *skb, static bool virtio_transport_valid_type(u16 type) { return (type == VIRTIO_VSOCK_TYPE_STREAM) || - (type == VIRTIO_VSOCK_TYPE_SEQPACKET); + (type == VIRTIO_VSOCK_TYPE_SEQPACKET) || + (type == VIRTIO_VSOCK_TYPE_DGRAM); } /* We are under the virtio-vsock's vsock->rx_lock or vhost-vsock's vq->mutex @@ -1314,40 +1396,52 @@ void virtio_transport_recv_pkt(struct virtio_transport *t, struct vsock_sock *vsk; struct sock *sk; bool space_available; + u16 type; vsock_addr_init(&src, le64_to_cpu(hdr->src_cid), le32_to_cpu(hdr->src_port)); vsock_addr_init(&dst, le64_to_cpu(hdr->dst_cid), le32_to_cpu(hdr->dst_port)); + type = le16_to_cpu(hdr->type); + trace_virtio_transport_recv_pkt(src.svm_cid, src.svm_port, dst.svm_cid, dst.svm_port, le32_to_cpu(hdr->len), - le16_to_cpu(hdr->type), + type, le16_to_cpu(hdr->op), le32_to_cpu(hdr->flags), le32_to_cpu(hdr->buf_alloc), le32_to_cpu(hdr->fwd_cnt)); - if (!virtio_transport_valid_type(le16_to_cpu(hdr->type))) { + if (!virtio_transport_valid_type(type)) { (void)virtio_transport_reset_no_sock(t, skb); goto free_pkt; } - /* The socket must be in connected or bound table - * otherwise send reset back + /* For stream/seqpacket, the socket must be in connected or bound table + * otherwise send reset back. + * + * For datagrams, no reset is sent back. 
*/ sk = vsock_find_connected_socket(&src, &dst); if (!sk) { - sk = vsock_find_bound_socket(&dst); - if (!sk) { - (void)virtio_transport_reset_no_sock(t, skb); - goto free_pkt; + if (type == VIRTIO_VSOCK_TYPE_DGRAM) { + sk = vsock_find_bound_dgram_socket(&dst); + if (!sk) + goto free_pkt; + } else { + sk = vsock_find_bound_socket(&dst); + if (!sk) { + (void)virtio_transport_reset_no_sock(t, skb); + goto free_pkt; + } } } - if (virtio_transport_get_type(sk) != le16_to_cpu(hdr->type)) { - (void)virtio_transport_reset_no_sock(t, skb); + if (virtio_transport_get_type(sk) != type) { + if (type != VIRTIO_VSOCK_TYPE_DGRAM) + (void)virtio_transport_reset_no_sock(t, skb); sock_put(sk); goto free_pkt; } @@ -1363,12 +1457,18 @@ void virtio_transport_recv_pkt(struct virtio_transport *t, /* Check if sk has been closed before lock_sock */ if (sock_flag(sk, SOCK_DONE)) { - (void)virtio_transport_reset_no_sock(t, skb); + if (type != VIRTIO_VSOCK_TYPE_DGRAM) + (void)virtio_transport_reset_no_sock(t, skb); release_sock(sk); sock_put(sk); goto free_pkt; } + if (sk->sk_type == SOCK_DGRAM) { + virtio_transport_recv_dgram(sk, skb); + goto out; + } + space_available = virtio_transport_space_update(sk, skb); /* Update CID in case it has changed after a transport reset event */ @@ -1400,6 +1500,7 @@ void virtio_transport_recv_pkt(struct virtio_transport *t, break; } +out: release_sock(sk); /* Release refcnt obtained when we fetched this socket out of the diff --git a/net/vmw_vsock/vsock_loopback.c b/net/vmw_vsock/vsock_loopback.c index e9de45a26fbd..68312aa8c972 100644 --- a/net/vmw_vsock/vsock_loopback.c +++ b/net/vmw_vsock/vsock_loopback.c @@ -46,6 +46,7 @@ static int vsock_loopback_cancel_pkt(struct vsock_sock *vsk) return 0; } +static bool vsock_loopback_dgram_allow(u32 cid, u32 port); static bool vsock_loopback_seqpacket_allow(u32 remote_cid); static struct virtio_transport loopback_transport = { @@ -62,7 +63,7 @@ static struct virtio_transport loopback_transport = { .cancel_pkt = 
vsock_loopback_cancel_pkt, .dgram_enqueue = virtio_transport_dgram_enqueue, - .dgram_allow = virtio_transport_dgram_allow, + .dgram_allow = vsock_loopback_dgram_allow, .dgram_get_cid = virtio_transport_dgram_get_cid, .dgram_get_port = virtio_transport_dgram_get_port, .dgram_get_length = virtio_transport_dgram_get_length, @@ -98,6 +99,11 @@ static struct virtio_transport loopback_transport = { .send_pkt = vsock_loopback_send_pkt, }; +static bool vsock_loopback_dgram_allow(u32 cid, u32 port) +{ + return true; +} + static bool vsock_loopback_seqpacket_allow(u32 remote_cid) { return true;

From patchwork Sat Jun 10 00:58:34 2023
X-Patchwork-Submitter: Bobby Eshleman
X-Patchwork-Id: 105865
From: Bobby Eshleman Date: Sat, 10 Jun 2023
00:58:34 +0000
Subject: [PATCH RFC net-next v4 7/8] vsock: Add lockless sendmsg() support
Message-Id: <20230413-b4-vsock-dgram-v4-7-0cebbb2ae899@bytedance.com>
References: <20230413-b4-vsock-dgram-v4-0-0cebbb2ae899@bytedance.com>
In-Reply-To: <20230413-b4-vsock-dgram-v4-0-0cebbb2ae899@bytedance.com>
To: Stefan Hajnoczi, Stefano Garzarella, "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, "K. Y. Srinivasan", Haiyang Zhang, Wei Liu, Dexuan Cui, Bryan Tan, Vishnu Dasa, VMware PV-Drivers Reviewers
Cc: Dan Carpenter, Simon Horman, Krasnov Arseniy, kvm@vger.kernel.org, virtualization@lists.linux-foundation.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-hyperv@vger.kernel.org, bpf@vger.kernel.org, Bobby Eshleman
X-Mailer: b4 0.12.2

Because the dgram sendmsg() path for AF_VSOCK acquires the socket lock,
it does not scale when many senders share a socket.

Prior to this patch, the socket lock is used to protect both reads and
writes to the local_addr, remote_addr, transport, and buffer size
variables of a vsock socket. What follows are the new protection
schemes for these fields that ensure a race-free and usually lock-free
multi-sender sendmsg() path for vsock dgrams.

- local_addr
  local_addr changes as a result of binding a socket. The write path
  for local_addr is bind() and various vsock_auto_bind() call sites.
  After a socket has been bound via vsock_auto_bind() or bind(),
  subsequent calls to bind()/vsock_auto_bind() do not write to
  local_addr again. bind() rejects the user request and
  vsock_auto_bind() early exits. Therefore, local_addr cannot change
  while a parallel thread is in sendmsg(), and lock-free reads of
  local_addr in sendmsg() are safe.
  Change: only acquire the lock for auto-binding, as needed, in
  sendmsg().

- buffer size variables
  Not used by dgram, so they do not need protection. No change.

- remote_addr and transport
  Because a remote_addr update may result in a changed transport, but
  we would like to read these two fields lock-free yet coherently in
  the vsock send path, this patch packages the two fields into a new
  struct vsock_remote_info that is referenced by an RCU-protected
  pointer. Writes are synchronized as usual by the socket lock. Reads
  only take place in RCU read-side critical sections. When remote_addr
  or transport is updated, a new remote info is allocated. Old readers
  still see the old coherent remote_addr/transport pair, and new
  readers will refer to the new coherent pair. The coherency between
  remote_addr and transport previously provided by the socket lock
  alone is now also preserved by RCU, except with a highly scalable
  lock-free read side.

Helpers are introduced for accessing and updating the new pointer.

The new structure contains an rcu_head so that kfree_rcu() can be used.
This removes the need for writers to call synchronize_rcu() after
freeing old structures, which is simply more efficient and reduces code
churn where remote_addr/transport are already being updated inside RCU
read-side sections.

Only virtio has been tested, but updates were necessary to the VMCI and
hyperv code. Unfortunately the author does not have access to
VMCI/hyperv systems, so those changes are untested.
Perf Tests (results from patch v2)

vCPUs: 16
Threads: 16
Payload: 4KB
Test Runs: 5
Type: SOCK_DGRAM

Before: 245.2 MB/s
After: 509.2 MB/s (+107%)

Notably, on the same test system, vsock dgram even outperforms multi-threaded UDP over virtio-net with vhost and MQ support enabled.

Throughput metrics for single-threaded SOCK_DGRAM and single/multi-threaded SOCK_STREAM showed no statistically significant throughput changes (lowest p-value reaching 0.27), with the mean difference ranging between -5% and +1%.

Signed-off-by: Bobby Eshleman
---
 drivers/vhost/vsock.c                   |  12 +-
 include/linux/virtio_vsock.h            |   3 +-
 include/net/af_vsock.h                  |  38 ++-
 net/vmw_vsock/af_vsock.c                | 399 ++++++++++++++++++++++++++------
 net/vmw_vsock/diag.c                    |  10 +-
 net/vmw_vsock/hyperv_transport.c        |  27 ++-
 net/vmw_vsock/virtio_transport_common.c |  34 ++-
 net/vmw_vsock/vmci_transport.c          |  84 +++++--
 net/vmw_vsock/vsock_bpf.c               |  10 +-
 9 files changed, 492 insertions(+), 125 deletions(-)

diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c index 159c1a22c1a8..b027a780d333 100644 --- a/drivers/vhost/vsock.c +++ b/drivers/vhost/vsock.c @@ -297,13 +297,17 @@ static int vhost_transport_cancel_pkt(struct vsock_sock *vsk) { struct vhost_vsock *vsock; + unsigned int cid; int cnt = 0; int ret = -ENODEV; rcu_read_lock(); + ret = vsock_remote_addr_cid(vsk, &cid); + if (ret < 0) + goto out; /* Find the vhost_vsock according to guest context id */ - vsock = vhost_vsock_get(vsk->remote_addr.svm_cid); + vsock = vhost_vsock_get(cid); if (!vsock) goto out; @@ -706,6 +710,10 @@ static void vhost_vsock_flush(struct vhost_vsock *vsock) static void vhost_vsock_reset_orphans(struct sock *sk) { struct vsock_sock *vsk = vsock_sk(sk); + unsigned int cid; + + if (vsock_remote_addr_cid(vsk, &cid) < 0) + return; /* vmci_transport.c doesn't take sk_lock here either. 
At least we're * under vsock_table_lock so the sock cannot disappear while we're @@ -713,7 +721,7 @@ static void vhost_vsock_reset_orphans(struct sock *sk) */ /* If the peer is still valid, no need to reset connection */ - if (vhost_vsock_get(vsk->remote_addr.svm_cid)) + if (vhost_vsock_get(cid)) return; /* If the close timeout is pending, let it expire. This avoids races diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h index 237ca87a2ecd..97656e83606f 100644 --- a/include/linux/virtio_vsock.h +++ b/include/linux/virtio_vsock.h @@ -231,7 +231,8 @@ virtio_transport_stream_enqueue(struct vsock_sock *vsk, struct msghdr *msg, size_t len); int -virtio_transport_dgram_enqueue(struct vsock_sock *vsk, +virtio_transport_dgram_enqueue(const struct vsock_transport *transport, + struct vsock_sock *vsk, struct sockaddr_vm *remote_addr, struct msghdr *msg, size_t len); diff --git a/include/net/af_vsock.h b/include/net/af_vsock.h index c115e655b4f5..928b09fbc64b 100644 --- a/include/net/af_vsock.h +++ b/include/net/af_vsock.h @@ -25,12 +25,17 @@ extern spinlock_t vsock_table_lock; #define vsock_sk(__sk) ((struct vsock_sock *)__sk) #define sk_vsock(__vsk) (&(__vsk)->sk) +struct vsock_remote_info { + struct sockaddr_vm addr; + struct rcu_head rcu; + const struct vsock_transport *transport; +}; + struct vsock_sock { /* sk must be the first member. */ struct sock sk; - const struct vsock_transport *transport; struct sockaddr_vm local_addr; - struct sockaddr_vm remote_addr; + struct vsock_remote_info __rcu *remote_info; /* Links for the global tables of bound and connected sockets. */ struct list_head bound_table; struct list_head connected_table; @@ -120,8 +125,8 @@ struct vsock_transport { /* DGRAM. 
*/ int (*dgram_bind)(struct vsock_sock *, struct sockaddr_vm *); - int (*dgram_enqueue)(struct vsock_sock *, struct sockaddr_vm *, - struct msghdr *, size_t len); + int (*dgram_enqueue)(const struct vsock_transport *, struct vsock_sock *, + struct sockaddr_vm *, struct msghdr *, size_t len); bool (*dgram_allow)(u32 cid, u32 port); int (*dgram_get_cid)(struct sk_buff *skb, unsigned int *cid); int (*dgram_get_port)(struct sk_buff *skb, unsigned int *port); @@ -196,6 +201,16 @@ void vsock_core_unregister(const struct vsock_transport *t); /* The transport may downcast this to access transport-specific functions */ const struct vsock_transport *vsock_core_get_transport(struct vsock_sock *vsk); +static inline struct vsock_remote_info * +vsock_core_get_remote_info(struct vsock_sock *vsk) +{ + /* vsk->remote_info may be accessed if the rcu read lock is held OR the + * socket lock is held + */ + return rcu_dereference_check(vsk->remote_info, + lockdep_sock_is_held(sk_vsock(vsk))); +} + /**** UTILS ****/ /* vsock_table_lock must be held */ @@ -214,7 +229,7 @@ void vsock_release_pending(struct sock *pending); void vsock_add_pending(struct sock *listener, struct sock *pending); void vsock_remove_pending(struct sock *listener, struct sock *pending); void vsock_enqueue_accept(struct sock *listener, struct sock *connected); -void vsock_insert_connected(struct vsock_sock *vsk); +int vsock_insert_connected(struct vsock_sock *vsk); void vsock_remove_bound(struct vsock_sock *vsk); void vsock_remove_connected(struct vsock_sock *vsk); struct sock *vsock_find_bound_socket(struct sockaddr_vm *addr); @@ -223,7 +238,8 @@ struct sock *vsock_find_connected_socket(struct sockaddr_vm *src, void vsock_remove_sock(struct vsock_sock *vsk); void vsock_for_each_connected_socket(struct vsock_transport *transport, void (*fn)(struct sock *sk)); -int vsock_assign_transport(struct vsock_sock *vsk, struct vsock_sock *psk); +int vsock_assign_transport(struct vsock_sock *vsk, struct vsock_sock *psk, + 
struct sockaddr_vm *remote_addr); bool vsock_find_cid(unsigned int cid); struct sock *vsock_find_bound_dgram_socket(struct sockaddr_vm *addr); @@ -253,4 +269,14 @@ static inline void __init vsock_bpf_build_proto(void) {} #endif +/* RCU-protected remote addr helpers */ +int vsock_remote_addr_cid(struct vsock_sock *vsk, unsigned int *cid); +int vsock_remote_addr_port(struct vsock_sock *vsk, unsigned int *port); +int vsock_remote_addr_cid_port(struct vsock_sock *vsk, unsigned int *cid, + unsigned int *port); +int vsock_remote_addr_copy(struct vsock_sock *vsk, struct sockaddr_vm *dest); +bool vsock_remote_addr_bound(struct vsock_sock *vsk); +bool vsock_remote_addr_equals(struct vsock_sock *vsk, struct sockaddr_vm *other); +int vsock_remote_addr_update_cid_port(struct vsock_sock *vsk, u32 cid, u32 port); + #endif /* __AF_VSOCK_H__ */ diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c index b0b18e7f4299..9e620d67889b 100644 --- a/net/vmw_vsock/af_vsock.c +++ b/net/vmw_vsock/af_vsock.c @@ -114,7 +114,12 @@ static int __vsock_bind(struct sock *sk, struct sockaddr_vm *addr); static void vsock_sk_destruct(struct sock *sk); static int vsock_queue_rcv_skb(struct sock *sk, struct sk_buff *skb); +static bool vsock_use_local_transport(unsigned int remote_cid); static bool sock_type_connectible(u16 type); +static const struct vsock_transport * +vsock_connectible_lookup_transport(unsigned int cid, __u8 flags); +static const struct vsock_transport * +vsock_dgram_lookup_transport(unsigned int cid, __u8 flags); /* Protocol family. 
*/ struct proto vsock_proto = { @@ -146,6 +151,123 @@ static const struct vsock_transport *transport_local; static DEFINE_MUTEX(vsock_register_mutex); /**** UTILS ****/ +bool vsock_remote_addr_bound(struct vsock_sock *vsk) +{ + struct vsock_remote_info *remote_info; + bool ret; + + rcu_read_lock(); + remote_info = vsock_core_get_remote_info(vsk); + if (!remote_info) { + rcu_read_unlock(); + return false; + } + + ret = vsock_addr_bound(&remote_info->addr); + rcu_read_unlock(); + + return ret; +} +EXPORT_SYMBOL_GPL(vsock_remote_addr_bound); + +int vsock_remote_addr_copy(struct vsock_sock *vsk, struct sockaddr_vm *dest) +{ + struct vsock_remote_info *remote_info; + + rcu_read_lock(); + remote_info = vsock_core_get_remote_info(vsk); + if (!remote_info) { + rcu_read_unlock(); + return -EINVAL; + } + memcpy(dest, &remote_info->addr, sizeof(*dest)); + rcu_read_unlock(); + + return 0; +} +EXPORT_SYMBOL_GPL(vsock_remote_addr_copy); + +int vsock_remote_addr_cid(struct vsock_sock *vsk, unsigned int *cid) +{ + return vsock_remote_addr_cid_port(vsk, cid, NULL); +} +EXPORT_SYMBOL_GPL(vsock_remote_addr_cid); + +int vsock_remote_addr_port(struct vsock_sock *vsk, unsigned int *port) +{ + return vsock_remote_addr_cid_port(vsk, NULL, port); +} +EXPORT_SYMBOL_GPL(vsock_remote_addr_port); + +int vsock_remote_addr_cid_port(struct vsock_sock *vsk, unsigned int *cid, + unsigned int *port) +{ + struct vsock_remote_info *remote_info; + + rcu_read_lock(); + remote_info = vsock_core_get_remote_info(vsk); + if (!remote_info) { + rcu_read_unlock(); + return -EINVAL; + } + + if (cid) + *cid = remote_info->addr.svm_cid; + if (port) + *port = remote_info->addr.svm_port; + + rcu_read_unlock(); + return 0; +} +EXPORT_SYMBOL_GPL(vsock_remote_addr_cid_port); + +/* The socket lock must be held by the caller */ +static int vsock_set_remote_info(struct vsock_sock *vsk, + const struct vsock_transport *transport, + struct sockaddr_vm *addr) +{ + struct vsock_remote_info *old, *new; + + if (addr || 
transport) { + new = kmalloc(sizeof(*new), GFP_KERNEL); + if (!new) + return -ENOMEM; + + if (addr) + memcpy(&new->addr, addr, sizeof(new->addr)); + + if (transport) + new->transport = transport; + } else { + new = NULL; + } + + old = rcu_replace_pointer(vsk->remote_info, new, + lockdep_sock_is_held(sk_vsock(vsk))); + kfree_rcu(old, rcu); + + return 0; +} + +bool vsock_remote_addr_equals(struct vsock_sock *vsk, + struct sockaddr_vm *other) +{ + struct vsock_remote_info *remote_info; + bool equals; + + rcu_read_lock(); + remote_info = vsock_core_get_remote_info(vsk); + if (!remote_info) { + rcu_read_unlock(); + return false; + } + + equals = vsock_addr_equals_addr(&remote_info->addr, other); + rcu_read_unlock(); + + return equals; +} +EXPORT_SYMBOL_GPL(vsock_remote_addr_equals); /* Each bound VSocket is stored in the bind hash table and each connected * VSocket is stored in the connected hash table. @@ -283,10 +405,17 @@ static struct sock *__vsock_find_connected_socket(struct sockaddr_vm *src, list_for_each_entry(vsk, vsock_connected_sockets(src, dst), connected_table) { - if (vsock_addr_equals_addr(src, &vsk->remote_addr) && + struct vsock_remote_info *remote_info; + + rcu_read_lock(); + remote_info = vsock_core_get_remote_info(vsk); + if (remote_info && + vsock_addr_equals_addr(src, &remote_info->addr) && dst->svm_port == vsk->local_addr.svm_port) { + rcu_read_unlock(); return sk_vsock(vsk); } + rcu_read_unlock(); } return NULL; @@ -299,14 +428,25 @@ static void vsock_insert_unbound(struct vsock_sock *vsk) spin_unlock_bh(&vsock_table_lock); } -void vsock_insert_connected(struct vsock_sock *vsk) +int vsock_insert_connected(struct vsock_sock *vsk) { - struct list_head *list = vsock_connected_sockets( - &vsk->remote_addr, &vsk->local_addr); + struct vsock_remote_info *remote_info; + struct list_head *list; + + rcu_read_lock(); + remote_info = vsock_core_get_remote_info(vsk); + if (!remote_info) { + rcu_read_unlock(); + return -EINVAL; + } + list = 
vsock_connected_sockets(&remote_info->addr, &vsk->local_addr); + rcu_read_unlock(); spin_lock_bh(&vsock_table_lock); __vsock_insert_connected(list, vsk); spin_unlock_bh(&vsock_table_lock); + + return 0; } EXPORT_SYMBOL_GPL(vsock_insert_connected); @@ -388,7 +528,7 @@ void vsock_for_each_connected_socket(struct vsock_transport *transport, struct vsock_sock *vsk; list_for_each_entry(vsk, &vsock_connected_table[i], connected_table) { - if (vsk->transport != transport) + if (vsock_core_get_transport(vsk) != transport) continue; fn(sk_vsock(vsk)); @@ -454,12 +594,19 @@ static bool vsock_use_local_transport(unsigned int remote_cid) static void vsock_deassign_transport(struct vsock_sock *vsk) { - if (!vsk->transport) + struct vsock_remote_info *remote_info; + + remote_info = rcu_replace_pointer(vsk->remote_info, NULL, + lockdep_sock_is_held(sk_vsock(vsk))); + if (!remote_info) return; - vsk->transport->destruct(vsk); - module_put(vsk->transport->module); - vsk->transport = NULL; + if (remote_info->transport) { + remote_info->transport->destruct(vsk); + module_put(remote_info->transport->module); + } + + kfree_rcu(remote_info, rcu); } static const struct vsock_transport * @@ -490,26 +637,29 @@ vsock_dgram_lookup_transport(unsigned int cid, __u8 flags) return transport_dgram; } -/* Assign a transport to a socket and call the .init transport callback. +/* Assign a transport and remote addr to a socket and call the .init transport + * callback. * - * Note: for connection oriented socket this must be called when vsk->remote_addr - * is set (e.g. during the connect() or when a connection request on a listener - * socket is received). - * The vsk->remote_addr is used to decide which transport to use: + * The remote_addr is used to decide which transport to use. 
Both the addr + * and transport are updated simultaneously via RCU-protected pointer: * - remote CID == VMADDR_CID_LOCAL or g2h->local_cid or VMADDR_CID_HOST if * g2h is not loaded, will use local transport; * - remote CID <= VMADDR_CID_HOST or h2g is not loaded or remote flags field * includes VMADDR_FLAG_TO_HOST flag value, will use guest->host transport; * - remote CID > VMADDR_CID_HOST will use host->guest transport; */ -int vsock_assign_transport(struct vsock_sock *vsk, struct vsock_sock *psk) +int vsock_assign_transport(struct vsock_sock *vsk, struct vsock_sock *psk, + struct sockaddr_vm *remote_addr) { const struct vsock_transport *new_transport; + struct vsock_remote_info *old_info; struct sock *sk = sk_vsock(vsk); - unsigned int remote_cid = vsk->remote_addr.svm_cid; + unsigned int remote_cid; __u8 remote_flags; int ret; + remote_cid = remote_addr->svm_cid; + /* If the packet is coming with the source and destination CIDs higher * than VMADDR_CID_HOST, then a vsock channel where all the packets are * forwarded to the host should be established. Then the host will @@ -519,10 +669,10 @@ int vsock_assign_transport(struct vsock_sock *vsk, struct vsock_sock *psk) * the connect path the flag can be set by the user space application. 
*/ if (psk && vsk->local_addr.svm_cid > VMADDR_CID_HOST && - vsk->remote_addr.svm_cid > VMADDR_CID_HOST) - vsk->remote_addr.svm_flags |= VMADDR_FLAG_TO_HOST; + remote_cid > VMADDR_CID_HOST) + remote_addr->svm_flags |= VMADDR_FLAG_TO_HOST; - remote_flags = vsk->remote_addr.svm_flags; + remote_flags = remote_addr->svm_flags; switch (sk->sk_type) { case SOCK_DGRAM: @@ -538,8 +688,9 @@ int vsock_assign_transport(struct vsock_sock *vsk, struct vsock_sock *psk) return -ESOCKTNOSUPPORT; } - if (vsk->transport) { - if (vsk->transport == new_transport) + old_info = vsock_core_get_remote_info(vsk); + if (old_info && old_info->transport) { + if (old_info->transport == new_transport) return 0; /* transport->release() must be called with sock lock acquired. @@ -548,7 +699,7 @@ int vsock_assign_transport(struct vsock_sock *vsk, struct vsock_sock *psk) * function is called on a new socket which is not assigned to * any transport. */ - vsk->transport->release(vsk); + old_info->transport->release(vsk); vsock_deassign_transport(vsk); } @@ -566,13 +717,18 @@ int vsock_assign_transport(struct vsock_sock *vsk, struct vsock_sock *psk) } } - ret = new_transport->init(vsk, psk); + ret = vsock_set_remote_info(vsk, new_transport, remote_addr); if (ret) { module_put(new_transport->module); return ret; } - vsk->transport = new_transport; + ret = new_transport->init(vsk, psk); + if (ret) { + vsock_set_remote_info(vsk, NULL, NULL); + module_put(new_transport->module); + return ret; + } return 0; } @@ -629,12 +785,14 @@ static bool vsock_is_pending(struct sock *sk) static int vsock_send_shutdown(struct sock *sk, int mode) { + const struct vsock_transport *transport; struct vsock_sock *vsk = vsock_sk(sk); - if (!vsk->transport) + transport = vsock_core_get_transport(vsk); + if (!transport) return -ENODEV; - return vsk->transport->shutdown(vsk, mode); + return transport->shutdown(vsk, mode); } static void vsock_pending_work(struct work_struct *work) @@ -757,7 +915,10 @@ static int 
__vsock_bind_connectible(struct vsock_sock *vsk, static int vsock_bind_dgram(struct vsock_sock *vsk, struct sockaddr_vm *addr) { - if (!vsk->transport || !vsk->transport->dgram_bind) { + const struct vsock_transport *transport; + + transport = vsock_core_get_transport(vsk); + if (!transport || !transport->dgram_bind) { int retval; spin_lock_bh(&vsock_dgram_table_lock); @@ -768,7 +929,7 @@ static int vsock_bind_dgram(struct vsock_sock *vsk, return retval; } - return vsk->transport->dgram_bind(vsk, addr); + return transport->dgram_bind(vsk, addr); } static int __vsock_bind(struct sock *sk, struct sockaddr_vm *addr) @@ -817,6 +978,7 @@ static struct sock *__vsock_create(struct net *net, unsigned short type, int kern) { + struct vsock_remote_info *remote_info; struct sock *sk; struct vsock_sock *psk; struct vsock_sock *vsk; @@ -836,7 +998,14 @@ static struct sock *__vsock_create(struct net *net, vsk = vsock_sk(sk); vsock_addr_init(&vsk->local_addr, VMADDR_CID_ANY, VMADDR_PORT_ANY); - vsock_addr_init(&vsk->remote_addr, VMADDR_CID_ANY, VMADDR_PORT_ANY); + + remote_info = kmalloc(sizeof(*remote_info), GFP_KERNEL); + if (!remote_info) { + sk_free(sk); + return NULL; + } + vsock_addr_init(&remote_info->addr, VMADDR_CID_ANY, VMADDR_PORT_ANY); + rcu_assign_pointer(vsk->remote_info, remote_info); sk->sk_destruct = vsock_sk_destruct; sk->sk_backlog_rcv = vsock_queue_rcv_skb; @@ -883,6 +1052,7 @@ static bool sock_type_connectible(u16 type) static void __vsock_release(struct sock *sk, int level) { if (sk) { + const struct vsock_transport *transport; struct sock *pending; struct vsock_sock *vsk; @@ -896,8 +1066,9 @@ static void __vsock_release(struct sock *sk, int level) */ lock_sock_nested(sk, level); - if (vsk->transport) - vsk->transport->release(vsk); + transport = vsock_core_get_transport(vsk); + if (transport) + transport->release(vsk); else if (sock_type_connectible(sk->sk_type)) vsock_remove_sock(vsk); @@ -927,8 +1098,6 @@ static void vsock_sk_destruct(struct sock *sk) * 
possibly register the address family with the kernel. */ vsock_addr_init(&vsk->local_addr, VMADDR_CID_ANY, VMADDR_PORT_ANY); - vsock_addr_init(&vsk->remote_addr, VMADDR_CID_ANY, VMADDR_PORT_ANY); - put_cred(vsk->owner); } @@ -952,16 +1121,22 @@ EXPORT_SYMBOL_GPL(vsock_create_connected); s64 vsock_stream_has_data(struct vsock_sock *vsk) { - return vsk->transport->stream_has_data(vsk); + const struct vsock_transport *transport; + + transport = vsock_core_get_transport(vsk); + + return transport->stream_has_data(vsk); } EXPORT_SYMBOL_GPL(vsock_stream_has_data); s64 vsock_connectible_has_data(struct vsock_sock *vsk) { + const struct vsock_transport *transport; struct sock *sk = sk_vsock(vsk); + transport = vsock_core_get_transport(vsk); if (sk->sk_type == SOCK_SEQPACKET) - return vsk->transport->seqpacket_has_data(vsk); + return transport->seqpacket_has_data(vsk); else return vsock_stream_has_data(vsk); } @@ -969,7 +1144,10 @@ EXPORT_SYMBOL_GPL(vsock_connectible_has_data); s64 vsock_stream_has_space(struct vsock_sock *vsk) { - return vsk->transport->stream_has_space(vsk); + const struct vsock_transport *transport; + + transport = vsock_core_get_transport(vsk); + return transport->stream_has_space(vsk); } EXPORT_SYMBOL_GPL(vsock_stream_has_space); @@ -1018,6 +1196,7 @@ static int vsock_getname(struct socket *sock, struct sock *sk; struct vsock_sock *vsk; struct sockaddr_vm *vm_addr; + struct vsock_remote_info *rcu_ptr; sk = sock->sk; vsk = vsock_sk(sk); @@ -1030,7 +1209,14 @@ static int vsock_getname(struct socket *sock, err = -ENOTCONN; goto out; } - vm_addr = &vsk->remote_addr; + + rcu_ptr = vsock_core_get_remote_info(vsk); + if (!rcu_ptr) { + err = -EINVAL; + goto out; + } + + vm_addr = &rcu_ptr->addr; } else { vm_addr = &vsk->local_addr; } @@ -1154,7 +1340,7 @@ static __poll_t vsock_poll(struct file *file, struct socket *sock, lock_sock(sk); - transport = vsk->transport; + transport = vsock_core_get_transport(vsk); /* Listening sockets that have connections in their 
accept * queue can be read. @@ -1225,9 +1411,11 @@ static __poll_t vsock_poll(struct file *file, struct socket *sock, static int vsock_read_skb(struct sock *sk, skb_read_actor_t read_actor) { + const struct vsock_transport *transport; struct vsock_sock *vsk = vsock_sk(sk); - return vsk->transport->read_skb(vsk, read_actor); + transport = vsock_core_get_transport(vsk); + return transport->read_skb(vsk, read_actor); } static int vsock_dgram_sendmsg(struct socket *sock, struct msghdr *msg, @@ -1236,7 +1424,7 @@ static int vsock_dgram_sendmsg(struct socket *sock, struct msghdr *msg, int err; struct sock *sk; struct vsock_sock *vsk; - struct sockaddr_vm *remote_addr; + struct sockaddr_vm stack_addr, *remote_addr; const struct vsock_transport *transport; if (msg->msg_flags & MSG_OOB) @@ -1247,7 +1435,23 @@ static int vsock_dgram_sendmsg(struct socket *sock, struct msghdr *msg, sk = sock->sk; vsk = vsock_sk(sk); - lock_sock(sk); + /* If auto-binding is required, acquire the slock to avoid potential + * race conditions. Otherwise, do not acquire the lock. + * + * We know that the first check of local_addr is racy (indicated by + * data_race()). By acquiring the lock and then subsequently checking + * again if local_addr is bound (inside vsock_auto_bind()), we can + * ensure there are no real data races. + * + * This technique is borrowed from inet_send_prepare(). + */ + if (data_race(!vsock_addr_bound(&vsk->local_addr))) { + lock_sock(sk); + err = vsock_auto_bind(vsk); + release_sock(sk); + if (err) + return err; + } /* If the provided message contains an address, use that. Otherwise * fall back on the socket's remote handle (if it has been connected). 
@@ -1257,6 +1461,7 @@ static int vsock_dgram_sendmsg(struct socket *sock, struct msghdr *msg, &remote_addr) == 0) { transport = vsock_dgram_lookup_transport(remote_addr->svm_cid, remote_addr->svm_flags); + if (!transport) { err = -EINVAL; goto out; } @@ -1287,18 +1492,39 @@ static int vsock_dgram_sendmsg(struct socket *sock, struct msghdr *msg, goto out; } - err = transport->dgram_enqueue(vsk, remote_addr, msg, len); + err = transport->dgram_enqueue(transport, vsk, remote_addr, msg, len); module_put(transport->module); } else if (sock->state == SS_CONNECTED) { - remote_addr = &vsk->remote_addr; - transport = vsk->transport; + struct vsock_remote_info *remote_info; + const struct vsock_transport *transport; - err = vsock_auto_bind(vsk); - if (err) + rcu_read_lock(); + remote_info = vsock_core_get_remote_info(vsk); + if (!remote_info) { + err = -EINVAL; + rcu_read_unlock(); goto out; + } - if (remote_addr->svm_cid == VMADDR_CID_ANY) + transport = remote_info->transport; + memcpy(&stack_addr, &remote_info->addr, sizeof(stack_addr)); + rcu_read_unlock(); + + remote_addr = &stack_addr; + + if (remote_addr->svm_cid == VMADDR_CID_ANY) { remote_addr->svm_cid = transport->get_local_cid(); + lock_sock(sk_vsock(vsk)); + /* Even though the CID has changed, we do not have to + * look up the transport again because the local CID + * will never resolve to a different transport. + */ + err = vsock_set_remote_info(vsk, transport, remote_addr); + release_sock(sk_vsock(vsk)); + + if (err) + goto out; + } /* XXX Should connect() or this function ensure remote_addr is * bound? 
@@ -1314,14 +1540,13 @@ static int vsock_dgram_sendmsg(struct socket *sock, struct msghdr *msg, goto out; } - err = transport->dgram_enqueue(vsk, remote_addr, msg, len); + err = transport->dgram_enqueue(transport, vsk, &stack_addr, msg, len); } else { err = -EINVAL; goto out; } out: - release_sock(sk); return err; } @@ -1332,18 +1557,22 @@ static int vsock_dgram_connect(struct socket *sock, struct sock *sk; struct vsock_sock *vsk; struct sockaddr_vm *remote_addr; + const struct vsock_transport *transport; sk = sock->sk; vsk = vsock_sk(sk); err = vsock_addr_cast(addr, addr_len, &remote_addr); if (err == -EAFNOSUPPORT && remote_addr->svm_family == AF_UNSPEC) { + struct sockaddr_vm addr_any; + lock_sock(sk); - vsock_addr_init(&vsk->remote_addr, VMADDR_CID_ANY, - VMADDR_PORT_ANY); + vsock_addr_init(&addr_any, VMADDR_CID_ANY, VMADDR_PORT_ANY); + err = vsock_set_remote_info(vsk, vsock_core_get_transport(vsk), + &addr_any); sock->state = SS_UNCONNECTED; release_sock(sk); - return 0; + return err; } else if (err != 0) return -EINVAL; @@ -1353,14 +1582,13 @@ static int vsock_dgram_connect(struct socket *sock, if (err) goto out; - memcpy(&vsk->remote_addr, remote_addr, sizeof(vsk->remote_addr)); - - err = vsock_assign_transport(vsk, NULL); + err = vsock_assign_transport(vsk, NULL, remote_addr); if (err) goto out; - if (!vsk->transport->dgram_allow(remote_addr->svm_cid, - remote_addr->svm_port)) { + transport = vsock_core_get_transport(vsk); + if (!transport->dgram_allow(remote_addr->svm_cid, + remote_addr->svm_port)) { err = -EINVAL; goto out; } @@ -1407,7 +1635,9 @@ int vsock_dgram_recvmsg(struct socket *sock, struct msghdr *msg, if (flags & MSG_OOB || flags & MSG_ERRQUEUE) return -EOPNOTSUPP; - transport = vsk->transport; + rcu_read_lock(); + transport = vsock_core_get_transport(vsk); + rcu_read_unlock(); /* Retrieve the head sk_buff from the socket's receive queue. 
*/ err = 0; @@ -1475,7 +1705,7 @@ static const struct proto_ops vsock_dgram_ops = { static int vsock_transport_cancel_pkt(struct vsock_sock *vsk) { - const struct vsock_transport *transport = vsk->transport; + const struct vsock_transport *transport = vsock_core_get_transport(vsk); if (!transport || !transport->cancel_pkt) return -EOPNOTSUPP; @@ -1512,6 +1742,7 @@ static int vsock_connect(struct socket *sock, struct sockaddr *addr, struct sock *sk; struct vsock_sock *vsk; const struct vsock_transport *transport; + struct vsock_remote_info *remote_info; struct sockaddr_vm *remote_addr; long timeout; DEFINE_WAIT(wait); @@ -1549,14 +1780,20 @@ static int vsock_connect(struct socket *sock, struct sockaddr *addr, } /* Set the remote address that we are connecting to. */ - memcpy(&vsk->remote_addr, remote_addr, - sizeof(vsk->remote_addr)); - - err = vsock_assign_transport(vsk, NULL); + err = vsock_assign_transport(vsk, NULL, remote_addr); if (err) goto out; - transport = vsk->transport; + rcu_read_lock(); + remote_info = vsock_core_get_remote_info(vsk); + if (!remote_info) { + err = -EINVAL; + rcu_read_unlock(); + goto out; + } + + transport = remote_info->transport; + rcu_read_unlock(); /* The hypervisor and well-known contexts do not have socket * endpoints. @@ -1820,7 +2057,7 @@ static int vsock_connectible_setsockopt(struct socket *sock, lock_sock(sk); - transport = vsk->transport; + transport = vsock_core_get_transport(vsk); switch (optname) { case SO_VM_SOCKETS_BUFFER_SIZE: @@ -1958,7 +2195,7 @@ static int vsock_connectible_sendmsg(struct socket *sock, struct msghdr *msg, lock_sock(sk); - transport = vsk->transport; + transport = vsock_core_get_transport(vsk); /* Callers should not provide a destination with connection oriented * sockets. 
@@ -1981,7 +2218,7 @@ static int vsock_connectible_sendmsg(struct socket *sock, struct msghdr *msg, goto out; } - if (!vsock_addr_bound(&vsk->remote_addr)) { + if (!vsock_remote_addr_bound(vsk)) { err = -EDESTADDRREQ; goto out; } @@ -2102,7 +2339,7 @@ static int vsock_connectible_wait_data(struct sock *sk, vsk = vsock_sk(sk); err = 0; - transport = vsk->transport; + transport = vsock_core_get_transport(vsk); while (1) { prepare_to_wait(sk_sleep(sk), wait, TASK_INTERRUPTIBLE); @@ -2170,7 +2407,7 @@ static int __vsock_stream_recvmsg(struct sock *sk, struct msghdr *msg, DEFINE_WAIT(wait); vsk = vsock_sk(sk); - transport = vsk->transport; + transport = vsock_core_get_transport(vsk); /* We must not copy less than target bytes into the user's buffer * before returning successfully, so we wait for the consume queue to @@ -2246,7 +2483,7 @@ static int __vsock_seqpacket_recvmsg(struct sock *sk, struct msghdr *msg, DEFINE_WAIT(wait); vsk = vsock_sk(sk); - transport = vsk->transport; + transport = vsock_core_get_transport(vsk); timeout = sock_rcvtimeo(sk, flags & MSG_DONTWAIT); @@ -2303,7 +2540,7 @@ vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len, lock_sock(sk); - transport = vsk->transport; + transport = vsock_core_get_transport(vsk); if (!transport || sk->sk_state != TCP_ESTABLISHED) { /* Recvmsg is supposed to return 0 if a peer performs an @@ -2370,7 +2607,7 @@ static int vsock_set_rcvlowat(struct sock *sk, int val) if (val > vsk->buffer_size) return -EINVAL; - transport = vsk->transport; + transport = vsock_core_get_transport(vsk); if (transport && transport->set_rcvlowat) return transport->set_rcvlowat(vsk, val); @@ -2460,7 +2697,10 @@ static int vsock_create(struct net *net, struct socket *sock, vsk = vsock_sk(sk); if (sock->type == SOCK_DGRAM) { - ret = vsock_assign_transport(vsk, NULL); + struct sockaddr_vm remote_addr; + + vsock_addr_init(&remote_addr, VMADDR_CID_ANY, VMADDR_PORT_ANY); + ret = vsock_assign_transport(vsk, NULL, 
&remote_addr); if (ret < 0) { sock_put(sk); return ret; @@ -2582,7 +2822,18 @@ static void __exit vsock_exit(void) const struct vsock_transport *vsock_core_get_transport(struct vsock_sock *vsk) { - return vsk->transport; + const struct vsock_transport *transport; + struct vsock_remote_info *remote_info; + + rcu_read_lock(); + remote_info = vsock_core_get_remote_info(vsk); + if (!remote_info) { + rcu_read_unlock(); + return NULL; + } + transport = remote_info->transport; + rcu_read_unlock(); + return transport; } EXPORT_SYMBOL_GPL(vsock_core_get_transport); diff --git a/net/vmw_vsock/diag.c b/net/vmw_vsock/diag.c index a2823b1c5e28..f843bae86b32 100644 --- a/net/vmw_vsock/diag.c +++ b/net/vmw_vsock/diag.c @@ -15,8 +15,14 @@ static int sk_diag_fill(struct sock *sk, struct sk_buff *skb, u32 portid, u32 seq, u32 flags) { struct vsock_sock *vsk = vsock_sk(sk); + struct sockaddr_vm remote_addr; struct vsock_diag_msg *rep; struct nlmsghdr *nlh; + int err; + + err = vsock_remote_addr_copy(vsk, &remote_addr); + if (err < 0) + return err; nlh = nlmsg_put(skb, portid, seq, SOCK_DIAG_BY_FAMILY, sizeof(*rep), flags); @@ -36,8 +42,8 @@ static int sk_diag_fill(struct sock *sk, struct sk_buff *skb, rep->vdiag_shutdown = sk->sk_shutdown; rep->vdiag_src_cid = vsk->local_addr.svm_cid; rep->vdiag_src_port = vsk->local_addr.svm_port; - rep->vdiag_dst_cid = vsk->remote_addr.svm_cid; - rep->vdiag_dst_port = vsk->remote_addr.svm_port; + rep->vdiag_dst_cid = remote_addr.svm_cid; + rep->vdiag_dst_port = remote_addr.svm_port; rep->vdiag_ino = sock_i_ino(sk); sock_diag_save_cookie(sk, rep->vdiag_cookie); diff --git a/net/vmw_vsock/hyperv_transport.c b/net/vmw_vsock/hyperv_transport.c index c00bc5da769a..84e8c64b3365 100644 --- a/net/vmw_vsock/hyperv_transport.c +++ b/net/vmw_vsock/hyperv_transport.c @@ -323,6 +323,8 @@ static void hvs_open_connection(struct vmbus_channel *chan) goto out; if (conn_from_host) { + struct sockaddr_vm remote_addr; + if (sk->sk_ack_backlog >= 
sk->sk_max_ack_backlog) goto out; @@ -336,10 +338,9 @@ static void hvs_open_connection(struct vmbus_channel *chan) hvs_addr_init(&vnew->local_addr, if_type); /* Remote peer is always the host */ - vsock_addr_init(&vnew->remote_addr, - VMADDR_CID_HOST, VMADDR_PORT_ANY); - vnew->remote_addr.svm_port = get_port_by_srv_id(if_instance); - ret = vsock_assign_transport(vnew, vsock_sk(sk)); + vsock_addr_init(&remote_addr, VMADDR_CID_HOST, get_port_by_srv_id(if_instance)); + + ret = vsock_assign_transport(vnew, vsock_sk(sk), &remote_addr); /* Transport assigned (looking at remote_addr) must be the * same where we received the request. */ @@ -459,13 +460,18 @@ static int hvs_connect(struct vsock_sock *vsk) { union hvs_service_id vm, host; struct hvsock *h = vsk->trans; + int err; vm.srv_id = srv_id_template; vm.svm_port = vsk->local_addr.svm_port; h->vm_srv_id = vm.srv_id; host.srv_id = srv_id_template; - host.svm_port = vsk->remote_addr.svm_port; + + err = vsock_remote_addr_port(vsk, &host.svm_port); + if (err < 0) + return err; + h->host_srv_id = host.srv_id; return vmbus_send_tl_connect_request(&h->vm_srv_id, &h->host_srv_id); @@ -566,7 +572,8 @@ static int hvs_dgram_get_length(struct sk_buff *skb, size_t *len) return -EOPNOTSUPP; } -static int hvs_dgram_enqueue(struct vsock_sock *vsk, +static int hvs_dgram_enqueue(const struct vsock_transport *transport, + struct vsock_sock *vsk, struct sockaddr_vm *remote, struct msghdr *msg, size_t dgram_len) { @@ -866,7 +873,13 @@ static struct vsock_transport hvs_transport = { static bool hvs_check_transport(struct vsock_sock *vsk) { - return vsk->transport == &hvs_transport; + bool ret; + + rcu_read_lock(); + ret = vsock_core_get_transport(vsk) == &hvs_transport; + rcu_read_unlock(); + + return ret; } static int hvs_probe(struct hv_device *hdev, diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c index bc9d459723f5..9d090f208648 100644 --- a/net/vmw_vsock/virtio_transport_common.c +++ 
b/net/vmw_vsock/virtio_transport_common.c @@ -259,8 +259,9 @@ static int virtio_transport_send_pkt_info(struct vsock_sock *vsk, src_cid = t_ops->transport.get_local_cid(); src_port = vsk->local_addr.svm_port; if (!info->remote_cid) { - dst_cid = vsk->remote_addr.svm_cid; - dst_port = vsk->remote_addr.svm_port; + ret = vsock_remote_addr_cid_port(vsk, &dst_cid, &dst_port); + if (ret < 0) + return ret; } else { dst_cid = info->remote_cid; dst_port = info->remote_port; @@ -878,12 +879,14 @@ int virtio_transport_shutdown(struct vsock_sock *vsk, int mode) EXPORT_SYMBOL_GPL(virtio_transport_shutdown); int -virtio_transport_dgram_enqueue(struct vsock_sock *vsk, +virtio_transport_dgram_enqueue(const struct vsock_transport *transport, + struct vsock_sock *vsk, struct sockaddr_vm *remote_addr, struct msghdr *msg, size_t dgram_len) { - const struct virtio_transport *t_ops; + const struct virtio_transport *t_ops = + (const struct virtio_transport *)transport; struct virtio_vsock_pkt_info info = { .op = VIRTIO_VSOCK_OP_RW, .msg = msg, @@ -897,7 +900,6 @@ virtio_transport_dgram_enqueue(struct vsock_sock *vsk, if (dgram_len > VIRTIO_VSOCK_MAX_PKT_BUF_SIZE) return -EMSGSIZE; - t_ops = virtio_transport_get_ops(vsk); src_cid = t_ops->transport.get_local_cid(); src_port = vsk->local_addr.svm_port; @@ -1121,7 +1123,11 @@ virtio_transport_recv_connecting(struct sock *sk, case VIRTIO_VSOCK_OP_RESPONSE: sk->sk_state = TCP_ESTABLISHED; sk->sk_socket->state = SS_CONNECTED; - vsock_insert_connected(vsk); + err = vsock_insert_connected(vsk); + if (err) { + skerr = ECONNRESET; + goto destroy; + } sk->sk_state_change(sk); break; case VIRTIO_VSOCK_OP_INVALID: @@ -1323,6 +1329,7 @@ virtio_transport_recv_listen(struct sock *sk, struct sk_buff *skb, struct virtio_vsock_hdr *hdr = virtio_vsock_hdr(skb); struct vsock_sock *vsk = vsock_sk(sk); struct vsock_sock *vchild; + struct sockaddr_vm child_remote; struct sock *child; int ret; @@ -1351,14 +1358,13 @@ virtio_transport_recv_listen(struct sock *sk, 
struct sk_buff *skb, vchild = vsock_sk(child); vsock_addr_init(&vchild->local_addr, le64_to_cpu(hdr->dst_cid), le32_to_cpu(hdr->dst_port)); - vsock_addr_init(&vchild->remote_addr, le64_to_cpu(hdr->src_cid), + vsock_addr_init(&child_remote, le64_to_cpu(hdr->src_cid), le32_to_cpu(hdr->src_port)); - - ret = vsock_assign_transport(vchild, vsk); + ret = vsock_assign_transport(vchild, vsk, &child_remote); /* Transport assigned (looking at remote_addr) must be the same * where we received the request. */ - if (ret || vchild->transport != &t->transport) { + if (ret || vsock_core_get_transport(vchild) != &t->transport) { release_sock(child); virtio_transport_reset_no_sock(t, skb); sock_put(child); @@ -1368,7 +1374,13 @@ virtio_transport_recv_listen(struct sock *sk, struct sk_buff *skb, if (virtio_transport_space_update(child, skb)) child->sk_write_space(child); - vsock_insert_connected(vchild); + ret = vsock_insert_connected(vchild); + if (ret) { + release_sock(child); + virtio_transport_reset_no_sock(t, skb); + sock_put(child); + return ret; + } vsock_enqueue_accept(sk, child); virtio_transport_send_response(vchild, skb); diff --git a/net/vmw_vsock/vmci_transport.c b/net/vmw_vsock/vmci_transport.c index bbc63826bf48..943539857ccb 100644 --- a/net/vmw_vsock/vmci_transport.c +++ b/net/vmw_vsock/vmci_transport.c @@ -283,18 +283,25 @@ vmci_transport_send_control_pkt(struct sock *sk, u16 proto, struct vmci_handle handle) { + struct sockaddr_vm addr_stack; + struct sockaddr_vm *remote_addr = &addr_stack; struct vsock_sock *vsk; + int err; vsk = vsock_sk(sk); if (!vsock_addr_bound(&vsk->local_addr)) return -EINVAL; - if (!vsock_addr_bound(&vsk->remote_addr)) + if (!vsock_remote_addr_bound(vsk)) return -EINVAL; + err = vsock_remote_addr_copy(vsk, remote_addr); + if (err < 0) + return err; + return vmci_transport_alloc_send_control_pkt(&vsk->local_addr, - &vsk->remote_addr, + remote_addr, type, size, mode, wait, proto, handle); } @@ -317,6 +324,7 @@ static int 
vmci_transport_send_reset(struct sock *sk, struct sockaddr_vm *dst_ptr; struct sockaddr_vm dst; struct vsock_sock *vsk; + int err; if (pkt->type == VMCI_TRANSPORT_PACKET_TYPE_RST) return 0; @@ -326,13 +334,16 @@ static int vmci_transport_send_reset(struct sock *sk, if (!vsock_addr_bound(&vsk->local_addr)) return -EINVAL; - if (vsock_addr_bound(&vsk->remote_addr)) { - dst_ptr = &vsk->remote_addr; + if (vsock_remote_addr_bound(vsk)) { + err = vsock_remote_addr_copy(vsk, &dst); + if (err < 0) + return err; } else { vsock_addr_init(&dst, pkt->dg.src.context, pkt->src_port); - dst_ptr = &dst; } + dst_ptr = &dst; + return vmci_transport_alloc_send_control_pkt(&vsk->local_addr, dst_ptr, VMCI_TRANSPORT_PACKET_TYPE_RST, 0, 0, NULL, VSOCK_PROTO_INVALID, @@ -490,7 +501,7 @@ static struct sock *vmci_transport_get_pending( list_for_each_entry(vpending, &vlistener->pending_links, pending_links) { - if (vsock_addr_equals_addr(&src, &vpending->remote_addr) && + if (vsock_remote_addr_equals(vpending, &src) && pkt->dst_port == vpending->local_addr.svm_port) { pending = sk_vsock(vpending); sock_hold(pending); @@ -940,6 +951,7 @@ static void vmci_transport_recv_pkt_work(struct work_struct *work) static int vmci_transport_recv_listen(struct sock *sk, struct vmci_transport_packet *pkt) { + struct sockaddr_vm remote_addr; struct sock *pending; struct vsock_sock *vpending; int err; @@ -1015,10 +1027,10 @@ static int vmci_transport_recv_listen(struct sock *sk, vsock_addr_init(&vpending->local_addr, pkt->dg.dst.context, pkt->dst_port); - vsock_addr_init(&vpending->remote_addr, pkt->dg.src.context, - pkt->src_port); - err = vsock_assign_transport(vpending, vsock_sk(sk)); + vsock_addr_init(&remote_addr, pkt->dg.src.context, pkt->src_port); + + err = vsock_assign_transport(vpending, vsock_sk(sk), &remote_addr); /* Transport assigned (looking at remote_addr) must be the same * where we received the request. 
*/ @@ -1133,6 +1145,7 @@ vmci_transport_recv_connecting_server(struct sock *listener, { struct vsock_sock *vpending; struct vmci_handle handle; + unsigned int vpending_remote_cid; struct vmci_qp *qpair; bool is_local; u32 flags; @@ -1189,8 +1202,13 @@ vmci_transport_recv_connecting_server(struct sock *listener, /* vpending->local_addr always has a context id so we do not need to * worry about VMADDR_CID_ANY in this case. */ - is_local = - vpending->remote_addr.svm_cid == vpending->local_addr.svm_cid; + err = vsock_remote_addr_cid(vpending, &vpending_remote_cid); + if (err < 0) { + skerr = EPROTO; + goto destroy; + } + + is_local = vpending_remote_cid == vpending->local_addr.svm_cid; flags = VMCI_QPFLAG_ATTACH_ONLY; flags |= is_local ? VMCI_QPFLAG_LOCAL : 0; @@ -1203,7 +1221,7 @@ vmci_transport_recv_connecting_server(struct sock *listener, flags, vmci_transport_is_trusted( vpending, - vpending->remote_addr.svm_cid)); + vpending_remote_cid)); if (err < 0) { vmci_transport_send_reset(pending, pkt); skerr = -err; @@ -1277,6 +1295,8 @@ static int vmci_transport_recv_connecting_client(struct sock *sk, struct vmci_transport_packet *pkt) { + struct vsock_remote_info *remote_info; + struct sockaddr_vm *remote_addr; struct vsock_sock *vsk; int err; int skerr; @@ -1306,9 +1326,20 @@ vmci_transport_recv_connecting_client(struct sock *sk, break; case VMCI_TRANSPORT_PACKET_TYPE_NEGOTIATE: case VMCI_TRANSPORT_PACKET_TYPE_NEGOTIATE2: + rcu_read_lock(); + remote_info = vsock_core_get_remote_info(vsk); + if (!remote_info) { + skerr = EPROTO; + err = -EINVAL; + rcu_read_unlock(); + goto destroy; + } + + remote_addr = &remote_info->addr; + if (pkt->u.size == 0 - || pkt->dg.src.context != vsk->remote_addr.svm_cid - || pkt->src_port != vsk->remote_addr.svm_port + || pkt->dg.src.context != remote_addr->svm_cid + || pkt->src_port != remote_addr->svm_port || !vmci_handle_is_invalid(vmci_trans(vsk)->qp_handle) || vmci_trans(vsk)->qpair || vmci_trans(vsk)->produce_size != 0 @@ -1316,9 
+1347,10 @@ vmci_transport_recv_connecting_client(struct sock *sk, || vmci_trans(vsk)->detach_sub_id != VMCI_INVALID_ID) { skerr = EPROTO; err = -EINVAL; - + rcu_read_unlock(); goto destroy; } + rcu_read_unlock(); err = vmci_transport_recv_connecting_client_negotiate(sk, pkt); if (err) { @@ -1379,6 +1411,7 @@ static int vmci_transport_recv_connecting_client_negotiate( int err; struct vsock_sock *vsk; struct vmci_handle handle; + unsigned int remote_cid; struct vmci_qp *qpair; u32 detach_sub_id; bool is_local; @@ -1449,19 +1482,23 @@ static int vmci_transport_recv_connecting_client_negotiate( /* Make VMCI select the handle for us. */ handle = VMCI_INVALID_HANDLE; - is_local = vsk->remote_addr.svm_cid == vsk->local_addr.svm_cid; + + err = vsock_remote_addr_cid(vsk, &remote_cid); + if (err < 0) + goto destroy; + + is_local = remote_cid == vsk->local_addr.svm_cid; flags = is_local ? VMCI_QPFLAG_LOCAL : 0; err = vmci_transport_queue_pair_alloc(&qpair, &handle, pkt->u.size, pkt->u.size, - vsk->remote_addr.svm_cid, + remote_cid, flags, vmci_transport_is_trusted( vsk, - vsk-> - remote_addr.svm_cid)); + remote_cid)); if (err < 0) goto destroy; @@ -1692,6 +1729,7 @@ static int vmci_transport_dgram_bind(struct vsock_sock *vsk, } static int vmci_transport_dgram_enqueue( + const struct vsock_transport *transport, struct vsock_sock *vsk, struct sockaddr_vm *remote_addr, struct msghdr *msg, @@ -2052,7 +2090,13 @@ static struct vsock_transport vmci_transport = { static bool vmci_check_transport(struct vsock_sock *vsk) { - return vsk->transport == &vmci_transport; + bool retval; + + rcu_read_lock(); + retval = vsock_core_get_transport(vsk) == &vmci_transport; + rcu_read_unlock(); + + return retval; } static void vmci_vsock_transport_cb(bool is_host) diff --git a/net/vmw_vsock/vsock_bpf.c b/net/vmw_vsock/vsock_bpf.c index a3c97546ab84..4d811c9cdf6e 100644 --- a/net/vmw_vsock/vsock_bpf.c +++ b/net/vmw_vsock/vsock_bpf.c @@ -148,6 +148,7 @@ static void 
vsock_bpf_check_needs_rebuild(struct proto *ops) int vsock_bpf_update_proto(struct sock *sk, struct sk_psock *psock, bool restore) { + const struct vsock_transport *transport; struct vsock_sock *vsk; if (restore) { @@ -157,10 +158,15 @@ int vsock_bpf_update_proto(struct sock *sk, struct sk_psock *psock, bool restore } vsk = vsock_sk(sk); - if (!vsk->transport) + + rcu_read_lock(); + transport = vsock_core_get_transport(vsk); + rcu_read_unlock(); + + if (!transport) return -ENODEV; - if (!vsk->transport->read_skb) + if (!transport->read_skb) return -EOPNOTSUPP; vsock_bpf_check_needs_rebuild(psock->sk_proto);
From patchwork Sat Jun 10 00:58:35 2023
X-Patchwork-Submitter: Bobby Eshleman
X-Patchwork-Id: 105870
From: Bobby Eshleman
Date: Sat, 10 Jun 2023 00:58:35 +0000
Subject: [PATCH RFC net-next v4 8/8] tests: add vsock dgram tests
Message-Id: <20230413-b4-vsock-dgram-v4-8-0cebbb2ae899@bytedance.com>
References: <20230413-b4-vsock-dgram-v4-0-0cebbb2ae899@bytedance.com>
In-Reply-To: <20230413-b4-vsock-dgram-v4-0-0cebbb2ae899@bytedance.com>
To: Stefan Hajnoczi, Stefano Garzarella, "Michael S. Tsirkin", Jason Wang, Xuan Zhuo, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, "K. Y. Srinivasan", Haiyang Zhang, Wei Liu, Dexuan Cui, Bryan Tan, Vishnu Dasa, VMware PV-Drivers Reviewers
Cc: Dan Carpenter, Simon Horman, Krasnov Arseniy, kvm@vger.kernel.org, virtualization@lists.linux-foundation.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-hyperv@vger.kernel.org, bpf@vger.kernel.org, Bobby Eshleman, Jiang Wang

From: Jiang Wang

This patch adds tests for vsock datagram.
Signed-off-by: Bobby Eshleman Signed-off-by: Jiang Wang --- tools/testing/vsock/util.c | 141 ++++++++++++- tools/testing/vsock/util.h | 6 + tools/testing/vsock/vsock_test.c | 432 +++++++++++++++++++++++++++++++++++++++ 3 files changed, 578 insertions(+), 1 deletion(-) diff --git a/tools/testing/vsock/util.c b/tools/testing/vsock/util.c index 01b636d3039a..811e70d7cf1e 100644 --- a/tools/testing/vsock/util.c +++ b/tools/testing/vsock/util.c @@ -99,7 +99,8 @@ static int vsock_connect(unsigned int cid, unsigned int port, int type) int ret; int fd; - control_expectln("LISTENING"); + if (type != SOCK_DGRAM) + control_expectln("LISTENING"); fd = socket(AF_VSOCK, type, 0); @@ -130,6 +131,11 @@ int vsock_seqpacket_connect(unsigned int cid, unsigned int port) return vsock_connect(cid, port, SOCK_SEQPACKET); } +int vsock_dgram_connect(unsigned int cid, unsigned int port) +{ + return vsock_connect(cid, port, SOCK_DGRAM); +} + /* Listen on and return the first incoming connection. The remote * address is stored to clientaddrp. clientaddrp may be NULL. */ @@ -211,6 +217,34 @@ int vsock_seqpacket_accept(unsigned int cid, unsigned int port, return vsock_accept(cid, port, clientaddrp, SOCK_SEQPACKET); } +int vsock_dgram_bind(unsigned int cid, unsigned int port) +{ + union { + struct sockaddr sa; + struct sockaddr_vm svm; + } addr = { + .svm = { + .svm_family = AF_VSOCK, + .svm_port = port, + .svm_cid = cid, + }, + }; + int fd; + + fd = socket(AF_VSOCK, SOCK_DGRAM, 0); + if (fd < 0) { + perror("socket"); + exit(EXIT_FAILURE); + } + + if (bind(fd, &addr.sa, sizeof(addr.svm)) < 0) { + perror("bind"); + exit(EXIT_FAILURE); + } + + return fd; +} + /* Transmit one byte and check the return value. * * expected_ret: @@ -260,6 +294,57 @@ void send_byte(int fd, int expected_ret, int flags) } } +/* Transmit one byte and check the return value. 
+ * + * expected_ret: + * <0 Negative errno (for testing errors) + * 0 End-of-file + * 1 Success + */ +void sendto_byte(int fd, const struct sockaddr *dest_addr, int len, int expected_ret, + int flags) +{ + const uint8_t byte = 'A'; + ssize_t nwritten; + + timeout_begin(TIMEOUT); + do { + nwritten = sendto(fd, &byte, sizeof(byte), flags, dest_addr, + len); + timeout_check("write"); + } while (nwritten < 0 && errno == EINTR); + timeout_end(); + + if (expected_ret < 0) { + if (nwritten != -1) { + fprintf(stderr, "bogus sendto(2) return value %zd\n", + nwritten); + exit(EXIT_FAILURE); + } + if (errno != -expected_ret) { + perror("write"); + exit(EXIT_FAILURE); + } + return; + } + + if (nwritten < 0) { + perror("write"); + exit(EXIT_FAILURE); + } + if (nwritten == 0) { + if (expected_ret == 0) + return; + + fprintf(stderr, "unexpected EOF while sending byte\n"); + exit(EXIT_FAILURE); + } + if (nwritten != sizeof(byte)) { + fprintf(stderr, "bogus sendto(2) return value %zd\n", nwritten); + exit(EXIT_FAILURE); + } +} + /* Receive one byte and check the return value. * * expected_ret: @@ -313,6 +398,60 @@ void recv_byte(int fd, int expected_ret, int flags) } } +/* Receive one byte and check the return value. 
+ * + * expected_ret: + * <0 Negative errno (for testing errors) + * 0 End-of-file + * 1 Success + */ +void recvfrom_byte(int fd, struct sockaddr *src_addr, socklen_t *addrlen, + int expected_ret, int flags) +{ + uint8_t byte; + ssize_t nread; + + timeout_begin(TIMEOUT); + do { + nread = recvfrom(fd, &byte, sizeof(byte), flags, src_addr, addrlen); + timeout_check("read"); + } while (nread < 0 && errno == EINTR); + timeout_end(); + + if (expected_ret < 0) { + if (nread != -1) { + fprintf(stderr, "bogus recvfrom(2) return value %zd\n", + nread); + exit(EXIT_FAILURE); + } + if (errno != -expected_ret) { + perror("read"); + exit(EXIT_FAILURE); + } + return; + } + + if (nread < 0) { + perror("read"); + exit(EXIT_FAILURE); + } + if (nread == 0) { + if (expected_ret == 0) + return; + + fprintf(stderr, "unexpected EOF while receiving byte\n"); + exit(EXIT_FAILURE); + } + if (nread != sizeof(byte)) { + fprintf(stderr, "bogus recvfrom(2) return value %zd\n", nread); + exit(EXIT_FAILURE); + } + if (byte != 'A') { + fprintf(stderr, "unexpected byte read %c\n", byte); + exit(EXIT_FAILURE); + } +} + /* Run test cases. The program terminates if a failure occurs. 
*/ void run_tests(const struct test_case *test_cases, const struct test_opts *opts) diff --git a/tools/testing/vsock/util.h b/tools/testing/vsock/util.h index fb99208a95ea..a69e128d120c 100644 --- a/tools/testing/vsock/util.h +++ b/tools/testing/vsock/util.h @@ -37,13 +37,19 @@ void init_signals(void); unsigned int parse_cid(const char *str); int vsock_stream_connect(unsigned int cid, unsigned int port); int vsock_seqpacket_connect(unsigned int cid, unsigned int port); +int vsock_dgram_connect(unsigned int cid, unsigned int port); int vsock_stream_accept(unsigned int cid, unsigned int port, struct sockaddr_vm *clientaddrp); int vsock_seqpacket_accept(unsigned int cid, unsigned int port, struct sockaddr_vm *clientaddrp); +int vsock_dgram_bind(unsigned int cid, unsigned int port); void vsock_wait_remote_close(int fd); void send_byte(int fd, int expected_ret, int flags); +void sendto_byte(int fd, const struct sockaddr *dest_addr, int len, int expected_ret, + int flags); void recv_byte(int fd, int expected_ret, int flags); +void recvfrom_byte(int fd, struct sockaddr *src_addr, socklen_t *addrlen, + int expected_ret, int flags); void run_tests(const struct test_case *test_cases, const struct test_opts *opts); void list_tests(const struct test_case *test_cases); diff --git a/tools/testing/vsock/vsock_test.c b/tools/testing/vsock/vsock_test.c index ac1bd3ac1533..ded82d39ee5d 100644 --- a/tools/testing/vsock/vsock_test.c +++ b/tools/testing/vsock/vsock_test.c @@ -1053,6 +1053,413 @@ static void test_stream_virtio_skb_merge_server(const struct test_opts *opts) close(fd); } +static void test_dgram_sendto_client(const struct test_opts *opts) +{ + union { + struct sockaddr sa; + struct sockaddr_vm svm; + } addr = { + .svm = { + .svm_family = AF_VSOCK, + .svm_port = 1234, + .svm_cid = opts->peer_cid, + }, + }; + int fd; + + /* Wait for the server to be ready */ + control_expectln("BIND"); + + fd = socket(AF_VSOCK, SOCK_DGRAM, 0); + if (fd < 0) { + perror("socket"); + 
exit(EXIT_FAILURE); + } + + sendto_byte(fd, &addr.sa, sizeof(addr.svm), 1, 0); + + /* Notify the server that the client has finished */ + control_writeln("DONE"); + + close(fd); +} + +static void test_dgram_sendto_server(const struct test_opts *opts) +{ + union { + struct sockaddr sa; + struct sockaddr_vm svm; + } addr = { + .svm = { + .svm_family = AF_VSOCK, + .svm_port = 1234, + .svm_cid = VMADDR_CID_ANY, + }, + }; + socklen_t len = sizeof(addr.svm); + int fd; + + fd = socket(AF_VSOCK, SOCK_DGRAM, 0); + if (fd < 0) { + perror("socket"); + exit(EXIT_FAILURE); + } + + if (bind(fd, &addr.sa, sizeof(addr.svm)) < 0) { + perror("bind"); + exit(EXIT_FAILURE); + } + + /* Notify the client that the server is ready */ + control_writeln("BIND"); + + recvfrom_byte(fd, &addr.sa, &len, 1, 0); + + /* Wait for the client to finish */ + control_expectln("DONE"); + + close(fd); +} + +static void test_dgram_connect_client(const struct test_opts *opts) +{ + union { + struct sockaddr sa; + struct sockaddr_vm svm; + } addr = { + .svm = { + .svm_family = AF_VSOCK, + .svm_port = 1234, + .svm_cid = opts->peer_cid, + }, + }; + int ret; + int fd; + + /* Wait for the server to be ready */ + control_expectln("BIND"); + + fd = socket(AF_VSOCK, SOCK_DGRAM, 0); + if (fd < 0) { + perror("socket"); + exit(EXIT_FAILURE); + } + + ret = connect(fd, &addr.sa, sizeof(addr.svm)); + if (ret < 0) { + perror("connect"); + exit(EXIT_FAILURE); + } + + send_byte(fd, 1, 0); + + /* Notify the server that the client has finished */ + control_writeln("DONE"); + + close(fd); +} + +static void test_dgram_connect_server(const struct test_opts *opts) +{ + test_dgram_sendto_server(opts); +} + +static void test_dgram_multiconn_sendto_client(const struct test_opts *opts) +{ + union { + struct sockaddr sa; + struct sockaddr_vm svm; + } addr = { + .svm = { + .svm_family = AF_VSOCK, + .svm_port = 1234, + .svm_cid = opts->peer_cid, + }, + }; + int fds[MULTICONN_NFDS]; + int i; + + /* Wait for the server to be ready */ +
control_expectln("BIND"); + + for (i = 0; i < MULTICONN_NFDS; i++) { + fds[i] = socket(AF_VSOCK, SOCK_DGRAM, 0); + if (fds[i] < 0) { + perror("socket"); + exit(EXIT_FAILURE); + } + } + + for (i = 0; i < MULTICONN_NFDS; i++) + sendto_byte(fds[i], &addr.sa, sizeof(addr.svm), 1, 0); + + /* Notify the server that the client has finished */ + control_writeln("DONE"); + + for (i = 0; i < MULTICONN_NFDS; i++) + close(fds[i]); +} + +static void test_dgram_multiconn_sendto_server(const struct test_opts *opts) +{ + union { + struct sockaddr sa; + struct sockaddr_vm svm; + } addr = { + .svm = { + .svm_family = AF_VSOCK, + .svm_port = 1234, + .svm_cid = VMADDR_CID_ANY, + }, + }; + socklen_t len = sizeof(addr.svm); + int fd; + int i; + + fd = socket(AF_VSOCK, SOCK_DGRAM, 0); + if (fd < 0) { + perror("socket"); + exit(EXIT_FAILURE); + } + + if (bind(fd, &addr.sa, sizeof(addr.svm)) < 0) { + perror("bind"); + exit(EXIT_FAILURE); + } + + /* Notify the client that the server is ready */ + control_writeln("BIND"); + + for (i = 0; i < MULTICONN_NFDS; i++) + recvfrom_byte(fd, &addr.sa, &len, 1, 0); + + /* Wait for the client to finish */ + control_expectln("DONE"); + + close(fd); +} + +static void test_dgram_multiconn_send_client(const struct test_opts *opts) +{ + int fds[MULTICONN_NFDS]; + int i; + + /* Wait for the server to be ready */ + control_expectln("BIND"); + + for (i = 0; i < MULTICONN_NFDS; i++) { + fds[i] = vsock_dgram_connect(opts->peer_cid, 1234); + if (fds[i] < 0) { + perror("connect"); + exit(EXIT_FAILURE); + } + } + + for (i = 0; i < MULTICONN_NFDS; i++) + send_byte(fds[i], 1, 0); + + /* Notify the server that the client has finished */ + control_writeln("DONE"); + + for (i = 0; i < MULTICONN_NFDS; i++) + close(fds[i]); +} + +static void test_dgram_multiconn_send_server(const struct test_opts *opts) +{ + union { + struct sockaddr sa; + struct sockaddr_vm svm; + } addr = { + .svm = { + .svm_family = AF_VSOCK, + .svm_port = 1234, + .svm_cid = VMADDR_CID_ANY, + }, + }; + int fd;
+ int i; + + fd = socket(AF_VSOCK, SOCK_DGRAM, 0); + if (fd < 0) { + perror("socket"); + exit(EXIT_FAILURE); + } + + if (bind(fd, &addr.sa, sizeof(addr.svm)) < 0) { + perror("bind"); + exit(EXIT_FAILURE); + } + + /* Notify the client that the server is ready */ + control_writeln("BIND"); + + for (i = 0; i < MULTICONN_NFDS; i++) + recv_byte(fd, 1, 0); + + /* Wait for the client to finish */ + control_expectln("DONE"); + + close(fd); +} + +static void test_dgram_msg_bounds_client(const struct test_opts *opts) +{ + unsigned long recv_buf_size; + int page_size; + int msg_cnt; + int fd; + + fd = vsock_dgram_connect(opts->peer_cid, 1234); + if (fd < 0) { + perror("connect"); + exit(EXIT_FAILURE); + } + + /* Let the server know the client is ready */ + control_writeln("CLNTREADY"); + + msg_cnt = control_readulong(); + recv_buf_size = control_readulong(); + + /* Wait until the receiver has set its buffer size. */ + control_expectln("SRVREADY"); + + page_size = getpagesize(); + + for (int i = 0; i < msg_cnt; i++) { + unsigned long curr_hash; + ssize_t send_size; + size_t buf_size; + void *buf; + + /* Use "small" buffers and "big" buffers. */ + if (i & 1) + buf_size = page_size + + (rand() % (MAX_MSG_SIZE - page_size)); + else + buf_size = 1 + (rand() % page_size); + + buf_size = min(buf_size, recv_buf_size); + + buf = malloc(buf_size); + + if (!buf) { + perror("malloc"); + exit(EXIT_FAILURE); + } + + memset(buf, rand() & 0xff, buf_size); + + send_size = send(fd, buf, buf_size, 0); + + if (send_size < 0) { + perror("send"); + exit(EXIT_FAILURE); + } + + if (send_size != buf_size) { + fprintf(stderr, "Invalid send size\n"); + exit(EXIT_FAILURE); + } + + /* In theory the implementation isn't required to transmit + * these packets in order, so we use this SYNC control message + * so that server and client coordinate sending and receiving + * one packet at a time.
The client sends a packet and waits + * until it has been received before sending another. + */ + control_writeln("PKTSENT"); + control_expectln("PKTRECV"); + + /* Send the server a hash of the packet */ + curr_hash = hash_djb2(buf, buf_size); + control_writeulong(curr_hash); + free(buf); + } + + control_writeln("SENDDONE"); + close(fd); +} + +static void test_dgram_msg_bounds_server(const struct test_opts *opts) +{ + const unsigned long msg_cnt = 16; + unsigned long sock_buf_size; + struct msghdr msg = {0}; + struct iovec iov = {0}; + char buf[MAX_MSG_SIZE]; + socklen_t len; + int fd; + int i; + + fd = vsock_dgram_bind(VMADDR_CID_ANY, 1234); + + if (fd < 0) { + perror("bind"); + exit(EXIT_FAILURE); + } + + /* Set receive buffer to maximum */ + sock_buf_size = -1; + if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, + &sock_buf_size, sizeof(sock_buf_size))) { + perror("setsockopt(SO_RCVBUF)"); + exit(EXIT_FAILURE); + } + + /* Retrieve the receive buffer size */ + len = sizeof(sock_buf_size); + if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, + &sock_buf_size, &len)) { + perror("getsockopt(SO_RCVBUF)"); + exit(EXIT_FAILURE); + } + + /* Client ready to receive parameters */ + control_expectln("CLNTREADY"); + + control_writeulong(msg_cnt); + control_writeulong(sock_buf_size); + + /* Ready to receive data.
*/ + control_writeln("SRVREADY"); + + iov.iov_base = buf; + iov.iov_len = sizeof(buf); + msg.msg_iov = &iov; + msg.msg_iovlen = 1; + + for (i = 0; i < msg_cnt; i++) { + unsigned long remote_hash; + unsigned long curr_hash; + ssize_t recv_size; + + control_expectln("PKTSENT"); + recv_size = recvmsg(fd, &msg, 0); + control_writeln("PKTRECV"); + + if (!recv_size) + break; + + if (recv_size < 0) { + perror("recvmsg"); + exit(EXIT_FAILURE); + } + + curr_hash = hash_djb2(msg.msg_iov[0].iov_base, recv_size); + remote_hash = control_readulong(); + + if (curr_hash != remote_hash) { + fprintf(stderr, "Message bounds broken\n"); + exit(EXIT_FAILURE); + } + } + + close(fd); +} + static struct test_case test_cases[] = { { .name = "SOCK_STREAM connection reset", @@ -1128,6 +1535,31 @@ static struct test_case test_cases[] = { .run_client = test_stream_virtio_skb_merge_client, .run_server = test_stream_virtio_skb_merge_server, }, + { + .name = "SOCK_DGRAM client sendto", + .run_client = test_dgram_sendto_client, + .run_server = test_dgram_sendto_server, + }, + { + .name = "SOCK_DGRAM client connect", + .run_client = test_dgram_connect_client, + .run_server = test_dgram_connect_server, + }, + { + .name = "SOCK_DGRAM multiple connections using sendto", + .run_client = test_dgram_multiconn_sendto_client, + .run_server = test_dgram_multiconn_sendto_server, + }, + { + .name = "SOCK_DGRAM multiple connections using send", + .run_client = test_dgram_multiconn_send_client, + .run_server = test_dgram_multiconn_send_server, + }, + { + .name = "SOCK_DGRAM msg bounds", + .run_client = test_dgram_msg_bounds_client, + .run_server = test_dgram_msg_bounds_server, + }, {}, };