Message ID | 20221205084127.535-4-xieyongji@bytedance.com |
---|---|
State | New |
Headers | From: Xie Yongji <xieyongji@bytedance.com> To: mst@redhat.com, jasowang@redhat.com, tglx@linutronix.de, hch@lst.de Cc: virtualization@lists.linux-foundation.org, linux-kernel@vger.kernel.org Subject: [PATCH v2 03/11] vdpa: Add set_irq_affinity callback in vdpa_config_ops Date: Mon, 5 Dec 2022 16:41:19 +0800 Message-Id: <20221205084127.535-4-xieyongji@bytedance.com> In-Reply-To: <20221205084127.535-1-xieyongji@bytedance.com> References: <20221205084127.535-1-xieyongji@bytedance.com> |
Series | VDUSE: Improve performance |
Commit Message
Yongji Xie
Dec. 5, 2022, 8:41 a.m. UTC
This introduces a set_irq_affinity callback in vdpa_config_ops so that the
vdpa device driver can get the interrupt affinity hint from the virtio
device driver. The interrupt affinity hint is needed by the interrupt
affinity spreading mechanism.
Signed-off-by: Xie Yongji <xieyongji@bytedance.com>
---
drivers/virtio/virtio_vdpa.c | 4 ++++
include/linux/vdpa.h | 8 ++++++++
2 files changed, 12 insertions(+)
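
For readers skimming the series, the sketch below shows roughly what a vDPA parent driver that opts into the new hook might do with the hint: record it so it can be applied when the parent later sets up its interrupt vectors. The foo_* names and the saved-field layout are illustrative assumptions, not code from this series.

/*
 * Illustrative sketch only: a hypothetical vDPA parent driver ("foo")
 * implementing the optional set_irq_affinity op by saving the hint
 * passed down from the virtio device driver.
 */
#include <linux/interrupt.h>
#include <linux/vdpa.h>

struct foo_vdpa_dev {
	struct vdpa_device vdpa;
	struct irq_affinity irq_affinity;	/* hint from the virtio driver */
};

static void foo_vdpa_set_irq_affinity(struct vdpa_device *vdev,
				      struct irq_affinity *desc)
{
	struct foo_vdpa_dev *foo = container_of(vdev, struct foo_vdpa_dev, vdpa);

	/* An all-zero hint means every vector takes part in affinity spreading. */
	foo->irq_affinity = *desc;
}

static const struct vdpa_config_ops foo_vdpa_config_ops = {
	/* ...the mandatory ops are omitted in this sketch... */
	.set_irq_affinity = foo_vdpa_set_irq_affinity,
};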
Comments
On Mon, Dec 5, 2022 at 4:43 PM Xie Yongji <xieyongji@bytedance.com> wrote: > > This introduces set_irq_affinity callback in > vdpa_config_ops so that vdpa device driver can > get the interrupt affinity hint from the virtio > device driver. The interrupt affinity hint would > be needed by the interrupt affinity spreading > mechanism. > > Signed-off-by: Xie Yongji <xieyongji@bytedance.com> > --- > drivers/virtio/virtio_vdpa.c | 4 ++++ > include/linux/vdpa.h | 8 ++++++++ > 2 files changed, 12 insertions(+) > > diff --git a/drivers/virtio/virtio_vdpa.c b/drivers/virtio/virtio_vdpa.c > index 08084b49e5a1..4731e4616ee0 100644 > --- a/drivers/virtio/virtio_vdpa.c > +++ b/drivers/virtio/virtio_vdpa.c > @@ -275,9 +275,13 @@ static int virtio_vdpa_find_vqs(struct virtio_device *vdev, unsigned int nvqs, > struct virtio_vdpa_device *vd_dev = to_virtio_vdpa_device(vdev); > struct vdpa_device *vdpa = vd_get_vdpa(vdev); > const struct vdpa_config_ops *ops = vdpa->config; > + struct irq_affinity default_affd = { 0 }; > struct vdpa_callback cb; > int i, err, queue_idx = 0; > > + if (ops->set_irq_affinity) > + ops->set_irq_affinity(vdpa, desc ? desc : &default_affd); I wonder if we need to do this in vhost-vDPA. Or it's better to have a default affinity by the vDPA parent itself. (Looking at virtio-pci, it doesn't do something like this). Thanks > + > for (i = 0; i < nvqs; ++i) { > if (!names[i]) { > vqs[i] = NULL; > diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h > index 0ff6c9363356..482ff7d0206f 100644 > --- a/include/linux/vdpa.h > +++ b/include/linux/vdpa.h > @@ -256,6 +256,12 @@ struct vdpa_map_file { > * @vdev: vdpa device > * @idx: virtqueue index > * Returns the irq affinity mask > + * @set_irq_affinity: Pass the irq affinity hint from the virtio > + * device driver to vdpa driver (optional). > + * Needed by the interrupt affinity spreading > + * mechanism. > + * @vdev: vdpa device > + * @desc: irq affinity hint > * @set_group_asid: Set address space identifier for a > * virtqueue group (optional) > * @vdev: vdpa device > @@ -344,6 +350,8 @@ struct vdpa_config_ops { > const struct cpumask *cpu_mask); > const struct cpumask *(*get_vq_affinity)(struct vdpa_device *vdev, > u16 idx); > + void (*set_irq_affinity)(struct vdpa_device *vdev, > + struct irq_affinity *desc); > > /* DMA ops */ > int (*set_map)(struct vdpa_device *vdev, unsigned int asid, > -- > 2.20.1 >
On Fri, Dec 16, 2022 at 11:58 AM Jason Wang <jasowang@redhat.com> wrote: > > On Mon, Dec 5, 2022 at 4:43 PM Xie Yongji <xieyongji@bytedance.com> wrote: > > > > This introduces set_irq_affinity callback in > > vdpa_config_ops so that vdpa device driver can > > get the interrupt affinity hint from the virtio > > device driver. The interrupt affinity hint would > > be needed by the interrupt affinity spreading > > mechanism. > > > > Signed-off-by: Xie Yongji <xieyongji@bytedance.com> > > --- > > drivers/virtio/virtio_vdpa.c | 4 ++++ > > include/linux/vdpa.h | 8 ++++++++ > > 2 files changed, 12 insertions(+) > > > > diff --git a/drivers/virtio/virtio_vdpa.c b/drivers/virtio/virtio_vdpa.c > > index 08084b49e5a1..4731e4616ee0 100644 > > --- a/drivers/virtio/virtio_vdpa.c > > +++ b/drivers/virtio/virtio_vdpa.c > > @@ -275,9 +275,13 @@ static int virtio_vdpa_find_vqs(struct virtio_device *vdev, unsigned int nvqs, > > struct virtio_vdpa_device *vd_dev = to_virtio_vdpa_device(vdev); > > struct vdpa_device *vdpa = vd_get_vdpa(vdev); > > const struct vdpa_config_ops *ops = vdpa->config; > > + struct irq_affinity default_affd = { 0 }; > > struct vdpa_callback cb; > > int i, err, queue_idx = 0; > > > > + if (ops->set_irq_affinity) > > + ops->set_irq_affinity(vdpa, desc ? desc : &default_affd); > > I wonder if we need to do this in vhost-vDPA. I don't get why we need to do this in vhost-vDPA? Should this be done in VM? > Or it's better to have a > default affinity by the vDPA parent > I think both are OK. But the default value should always be zero, so I put it in a common place. > (Looking at virtio-pci, it doesn't do something like this). > Yes, but we did something like this in the pci layer: pci_alloc_irq_vectors_affinity(). Thanks, Yongji
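
For context on the PCI comparison made above: on the PCI transport the same kind of hint is handed to the PCI/MSI core through pci_alloc_irq_vectors_affinity(), which then spreads the managed vectors across CPUs. The fragment below is a simplified sketch of that idea using a made-up foo_* driver, not the actual virtio-pci code path.

/*
 * Simplified sketch: handing a struct irq_affinity hint to the PCI/MSI
 * core, which performs the affinity spreading on the driver's behalf.
 */
#include <linux/interrupt.h>
#include <linux/pci.h>

static int foo_pci_setup_vq_irqs(struct pci_dev *pdev, unsigned int nvectors,
				 struct irq_affinity *desc)
{
	struct irq_affinity default_affd = { 0 };

	/* Fall back to the all-zero hint, mirroring virtio_vdpa_find_vqs(). */
	return pci_alloc_irq_vectors_affinity(pdev, 1, nvectors,
					      PCI_IRQ_MSIX | PCI_IRQ_AFFINITY,
					      desc ? desc : &default_affd);
}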
On Mon, Dec 19, 2022 at 12:39 PM Yongji Xie <xieyongji@bytedance.com> wrote: > > On Fri, Dec 16, 2022 at 11:58 AM Jason Wang <jasowang@redhat.com> wrote: > > > > On Mon, Dec 5, 2022 at 4:43 PM Xie Yongji <xieyongji@bytedance.com> wrote: > > > > > > This introduces set_irq_affinity callback in > > > vdpa_config_ops so that vdpa device driver can > > > get the interrupt affinity hint from the virtio > > > device driver. The interrupt affinity hint would > > > be needed by the interrupt affinity spreading > > > mechanism. > > > > > > Signed-off-by: Xie Yongji <xieyongji@bytedance.com> > > > --- > > > drivers/virtio/virtio_vdpa.c | 4 ++++ > > > include/linux/vdpa.h | 8 ++++++++ > > > 2 files changed, 12 insertions(+) > > > > > > diff --git a/drivers/virtio/virtio_vdpa.c b/drivers/virtio/virtio_vdpa.c > > > index 08084b49e5a1..4731e4616ee0 100644 > > > --- a/drivers/virtio/virtio_vdpa.c > > > +++ b/drivers/virtio/virtio_vdpa.c > > > @@ -275,9 +275,13 @@ static int virtio_vdpa_find_vqs(struct virtio_device *vdev, unsigned int nvqs, > > > struct virtio_vdpa_device *vd_dev = to_virtio_vdpa_device(vdev); > > > struct vdpa_device *vdpa = vd_get_vdpa(vdev); > > > const struct vdpa_config_ops *ops = vdpa->config; > > > + struct irq_affinity default_affd = { 0 }; > > > struct vdpa_callback cb; > > > int i, err, queue_idx = 0; > > > > > > + if (ops->set_irq_affinity) > > > + ops->set_irq_affinity(vdpa, desc ? desc : &default_affd); > > > > I wonder if we need to do this in vhost-vDPA. > > I don't get why we need to do this in vhost-vDPA? Should this be done in VM? If I was not wrong, this tries to set affinity on the host instead of the guest. More below. > > > Or it's better to have a > > default affinity by the vDPA parent > > > > I think both are OK. But the default value should always be zero, so I > put it in a common place. I think we should either: 1) document the zero default value in vdpa.c 2) set the zero in both vhost-vdpa and virtio-vdpa, or in the vdpa core > > > (Looking at virtio-pci, it doesn't do something like this). > > > > Yes, but we did something like this in the pci layer: > pci_alloc_irq_vectors_affinity(). Ok. Thanks > > Thanks, > Yongji >
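
Jason's second option above (setting the zero default in one common place rather than in each bus driver) could look roughly like the helper sketched below; the helper name is hypothetical and not part of this series.

/*
 * Sketch: a core helper that supplies the all-zero default hint so that
 * virtio-vdpa and vhost-vdpa would not have to open-code it.
 * vdpa_set_irq_affinity() is a hypothetical helper, not an existing API.
 */
#include <linux/interrupt.h>
#include <linux/vdpa.h>

static void vdpa_set_irq_affinity(struct vdpa_device *vdpa,
				  struct irq_affinity *desc)
{
	struct irq_affinity default_affd = { 0 };	/* spread all vectors */
	const struct vdpa_config_ops *ops = vdpa->config;

	if (ops->set_irq_affinity)
		ops->set_irq_affinity(vdpa, desc ? desc : &default_affd);
}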
On Mon, Dec 19, 2022 at 2:06 PM Jason Wang <jasowang@redhat.com> wrote: > > On Mon, Dec 19, 2022 at 12:39 PM Yongji Xie <xieyongji@bytedance.com> wrote: > > > > On Fri, Dec 16, 2022 at 11:58 AM Jason Wang <jasowang@redhat.com> wrote: > > > > > > On Mon, Dec 5, 2022 at 4:43 PM Xie Yongji <xieyongji@bytedance.com> wrote: > > > > > > > > This introduces set_irq_affinity callback in > > > > vdpa_config_ops so that vdpa device driver can > > > > get the interrupt affinity hint from the virtio > > > > device driver. The interrupt affinity hint would > > > > be needed by the interrupt affinity spreading > > > > mechanism. > > > > > > > > Signed-off-by: Xie Yongji <xieyongji@bytedance.com> > > > > --- > > > > drivers/virtio/virtio_vdpa.c | 4 ++++ > > > > include/linux/vdpa.h | 8 ++++++++ > > > > 2 files changed, 12 insertions(+) > > > > > > > > diff --git a/drivers/virtio/virtio_vdpa.c b/drivers/virtio/virtio_vdpa.c > > > > index 08084b49e5a1..4731e4616ee0 100644 > > > > --- a/drivers/virtio/virtio_vdpa.c > > > > +++ b/drivers/virtio/virtio_vdpa.c > > > > @@ -275,9 +275,13 @@ static int virtio_vdpa_find_vqs(struct virtio_device *vdev, unsigned int nvqs, > > > > struct virtio_vdpa_device *vd_dev = to_virtio_vdpa_device(vdev); > > > > struct vdpa_device *vdpa = vd_get_vdpa(vdev); > > > > const struct vdpa_config_ops *ops = vdpa->config; > > > > + struct irq_affinity default_affd = { 0 }; > > > > struct vdpa_callback cb; > > > > int i, err, queue_idx = 0; > > > > > > > > + if (ops->set_irq_affinity) > > > > + ops->set_irq_affinity(vdpa, desc ? desc : &default_affd); > > > > > > I wonder if we need to do this in vhost-vDPA. > > > > I don't get why we need to do this in vhost-vDPA? Should this be done in VM? > > If I was not wrong, this tries to set affinity on the host instead of > the guest. More below. > Yes, it's host stuff. This is used by the virtio device driver to pass the irq affinity hint (tell which irq vectors don't need affinity management) to the irq affinity manager. In the VM case, it should only be related to the guest's virtio device driver and pci irq affinity manager. So I don't get why we need to do this in vhost-vDPA. > > > > > Or it's better to have a > > > default affinity by the vDPA parent > > > > > > > I think both are OK. But the default value should always be zero, so I > > put it in a common place. > > I think we should either: > > 1) document the zero default value in vdpa.c > 2) set the zero in both vhost-vdpa and virtio-vdpa, or in the vdpa core > Can we only call it in the virtio-vdpa case? Thus the vdpa device driver can know whether it needs to do the automatic irq affinity management or not. In the vhost-vdpa case, we actually don't need the irq affinity management. Thanks, Yongji
On Mon, Dec 19, 2022 at 3:12 PM Yongji Xie <xieyongji@bytedance.com> wrote: > > On Mon, Dec 19, 2022 at 2:06 PM Jason Wang <jasowang@redhat.com> wrote: > > > > On Mon, Dec 19, 2022 at 12:39 PM Yongji Xie <xieyongji@bytedance.com> wrote: > > > > > > On Fri, Dec 16, 2022 at 11:58 AM Jason Wang <jasowang@redhat.com> wrote: > > > > > > > > On Mon, Dec 5, 2022 at 4:43 PM Xie Yongji <xieyongji@bytedance.com> wrote: > > > > > > > > > > This introduces set_irq_affinity callback in > > > > > vdpa_config_ops so that vdpa device driver can > > > > > get the interrupt affinity hint from the virtio > > > > > device driver. The interrupt affinity hint would > > > > > be needed by the interrupt affinity spreading > > > > > mechanism. > > > > > > > > > > Signed-off-by: Xie Yongji <xieyongji@bytedance.com> > > > > > --- > > > > > drivers/virtio/virtio_vdpa.c | 4 ++++ > > > > > include/linux/vdpa.h | 8 ++++++++ > > > > > 2 files changed, 12 insertions(+) > > > > > > > > > > diff --git a/drivers/virtio/virtio_vdpa.c b/drivers/virtio/virtio_vdpa.c > > > > > index 08084b49e5a1..4731e4616ee0 100644 > > > > > --- a/drivers/virtio/virtio_vdpa.c > > > > > +++ b/drivers/virtio/virtio_vdpa.c > > > > > @@ -275,9 +275,13 @@ static int virtio_vdpa_find_vqs(struct virtio_device *vdev, unsigned int nvqs, > > > > > struct virtio_vdpa_device *vd_dev = to_virtio_vdpa_device(vdev); > > > > > struct vdpa_device *vdpa = vd_get_vdpa(vdev); > > > > > const struct vdpa_config_ops *ops = vdpa->config; > > > > > + struct irq_affinity default_affd = { 0 }; > > > > > struct vdpa_callback cb; > > > > > int i, err, queue_idx = 0; > > > > > > > > > > + if (ops->set_irq_affinity) > > > > > + ops->set_irq_affinity(vdpa, desc ? desc : &default_affd); > > > > > > > > I wonder if we need to do this in vhost-vDPA. > > > > > > I don't get why we need to do this in vhost-vDPA? Should this be done in VM? > > > > If I was not wrong, this tries to set affinity on the host instead of > > the guest. More below. > > > > Yes, it's host stuff. This is used by the virtio device driver to pass > the irq affinity hint (tell which irq vectors don't need affinity > management) to the irq affinity manager. In the VM case, it should > only be related to the guest's virtio device driver and pci irq > affinity manager. So I don't get why we need to do this in vhost-vDPA. It's not necessarily the VM, do we have the same requirement for userspace (like DPDK) drivers? Thanks > > > > > > > > Or it's better to have a > > > > default affinity by the vDPA parent > > > > > > > > > > I think both are OK. But the default value should always be zero, so I > > > put it in a common place. > > > > I think we should either: > > > > 1) document the zero default value in vdpa.c > > 2) set the zero in both vhost-vdpa and virtio-vdpa, or in the vdpa core > > > > Can we only call it in the virtio-vdpa case? Thus the vdpa device > driver can know whether it needs to do the automatic irq affinity > management or not. In the vhost-vdpa case, we actually don't need the > irq affinity management. > > Thanks, > Yongji >
On Tue, Dec 20, 2022 at 2:31 PM Jason Wang <jasowang@redhat.com> wrote: > > On Mon, Dec 19, 2022 at 3:12 PM Yongji Xie <xieyongji@bytedance.com> wrote: > > > > On Mon, Dec 19, 2022 at 2:06 PM Jason Wang <jasowang@redhat.com> wrote: > > > > > > On Mon, Dec 19, 2022 at 12:39 PM Yongji Xie <xieyongji@bytedance.com> wrote: > > > > > > > > On Fri, Dec 16, 2022 at 11:58 AM Jason Wang <jasowang@redhat.com> wrote: > > > > > > > > > > On Mon, Dec 5, 2022 at 4:43 PM Xie Yongji <xieyongji@bytedance.com> wrote: > > > > > > > > > > > > This introduces set_irq_affinity callback in > > > > > > vdpa_config_ops so that vdpa device driver can > > > > > > get the interrupt affinity hint from the virtio > > > > > > device driver. The interrupt affinity hint would > > > > > > be needed by the interrupt affinity spreading > > > > > > mechanism. > > > > > > > > > > > > Signed-off-by: Xie Yongji <xieyongji@bytedance.com> > > > > > > --- > > > > > > drivers/virtio/virtio_vdpa.c | 4 ++++ > > > > > > include/linux/vdpa.h | 8 ++++++++ > > > > > > 2 files changed, 12 insertions(+) > > > > > > > > > > > > diff --git a/drivers/virtio/virtio_vdpa.c b/drivers/virtio/virtio_vdpa.c > > > > > > index 08084b49e5a1..4731e4616ee0 100644 > > > > > > --- a/drivers/virtio/virtio_vdpa.c > > > > > > +++ b/drivers/virtio/virtio_vdpa.c > > > > > > @@ -275,9 +275,13 @@ static int virtio_vdpa_find_vqs(struct virtio_device *vdev, unsigned int nvqs, > > > > > > struct virtio_vdpa_device *vd_dev = to_virtio_vdpa_device(vdev); > > > > > > struct vdpa_device *vdpa = vd_get_vdpa(vdev); > > > > > > const struct vdpa_config_ops *ops = vdpa->config; > > > > > > + struct irq_affinity default_affd = { 0 }; > > > > > > struct vdpa_callback cb; > > > > > > int i, err, queue_idx = 0; > > > > > > > > > > > > + if (ops->set_irq_affinity) > > > > > > + ops->set_irq_affinity(vdpa, desc ? desc : &default_affd); > > > > > > > > > > I wonder if we need to do this in vhost-vDPA. > > > > > > > > I don't get why we need to do this in vhost-vDPA? Should this be done in VM? > > > > > > If I was not wrong, this tries to set affinity on the host instead of > > > the guest. More below. > > > > > > > Yes, it's host stuff. This is used by the virtio device driver to pass > > the irq affinity hint (tell which irq vectors don't need affinity > > management) to the irq affinity manager. In the VM case, it should > > only be related to the guest's virtio device driver and pci irq > > affinity manager. So I don't get why we need to do this in vhost-vDPA. > > It's not necessarily the VM, do we have the same requirement for > userspace (like DPDK) drivers? > IIUC the vhost-vdpa's irq callback just signals the eventfd. I didn't see how to use the irq affinity hint in vdpa device driver. The real irq callback should be called in DPDK internally. Thanks, Yongji
On Tue, Dec 20, 2022 at 6:14 PM Yongji Xie <xieyongji@bytedance.com> wrote: > > On Tue, Dec 20, 2022 at 2:31 PM Jason Wang <jasowang@redhat.com> wrote: > > > > On Mon, Dec 19, 2022 at 3:12 PM Yongji Xie <xieyongji@bytedance.com> wrote: > > > > > > On Mon, Dec 19, 2022 at 2:06 PM Jason Wang <jasowang@redhat.com> wrote: > > > > > > > > On Mon, Dec 19, 2022 at 12:39 PM Yongji Xie <xieyongji@bytedance.com> wrote: > > > > > > > > > > On Fri, Dec 16, 2022 at 11:58 AM Jason Wang <jasowang@redhat.com> wrote: > > > > > > > > > > > > On Mon, Dec 5, 2022 at 4:43 PM Xie Yongji <xieyongji@bytedance.com> wrote: > > > > > > > > > > > > > > This introduces set_irq_affinity callback in > > > > > > > vdpa_config_ops so that vdpa device driver can > > > > > > > get the interrupt affinity hint from the virtio > > > > > > > device driver. The interrupt affinity hint would > > > > > > > be needed by the interrupt affinity spreading > > > > > > > mechanism. > > > > > > > > > > > > > > Signed-off-by: Xie Yongji <xieyongji@bytedance.com> > > > > > > > --- > > > > > > > drivers/virtio/virtio_vdpa.c | 4 ++++ > > > > > > > include/linux/vdpa.h | 8 ++++++++ > > > > > > > 2 files changed, 12 insertions(+) > > > > > > > > > > > > > > diff --git a/drivers/virtio/virtio_vdpa.c b/drivers/virtio/virtio_vdpa.c > > > > > > > index 08084b49e5a1..4731e4616ee0 100644 > > > > > > > --- a/drivers/virtio/virtio_vdpa.c > > > > > > > +++ b/drivers/virtio/virtio_vdpa.c > > > > > > > @@ -275,9 +275,13 @@ static int virtio_vdpa_find_vqs(struct virtio_device *vdev, unsigned int nvqs, > > > > > > > struct virtio_vdpa_device *vd_dev = to_virtio_vdpa_device(vdev); > > > > > > > struct vdpa_device *vdpa = vd_get_vdpa(vdev); > > > > > > > const struct vdpa_config_ops *ops = vdpa->config; > > > > > > > + struct irq_affinity default_affd = { 0 }; > > > > > > > struct vdpa_callback cb; > > > > > > > int i, err, queue_idx = 0; > > > > > > > > > > > > > > + if (ops->set_irq_affinity) > > > > > > > + ops->set_irq_affinity(vdpa, desc ? desc : &default_affd); > > > > > > > > > > > > I wonder if we need to do this in vhost-vDPA. > > > > > > > > > > I don't get why we need to do this in vhost-vDPA? Should this be done in VM? > > > > > > > > If I was not wrong, this tries to set affinity on the host instead of > > > > the guest. More below. > > > > > > > > > > Yes, it's host stuff. This is used by the virtio device driver to pass > > > the irq affinity hint (tell which irq vectors don't need affinity > > > management) to the irq affinity manager. In the VM case, it should > > > only be related to the guest's virtio device driver and pci irq > > > affinity manager. So I don't get why we need to do this in vhost-vDPA. > > > > It's not necessarily the VM, do we have the same requirement for > > userspace (like DPDK) drivers? > > > > IIUC the vhost-vdpa's irq callback just signals the eventfd. I didn't > see how to use the irq affinity hint in vdpa device driver. The real > irq callback should be called in DPDK internally. I agree. Thanks > > Thanks, > Yongji >
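
As background for the vhost-vDPA point settled above: the per-virtqueue interrupt handler there only signals an eventfd consumed by userspace (a VMM or DPDK), which is why managed affinity spreading buys nothing in that case. The snippet below is a simplified illustration with a made-up foo_* structure, not the actual drivers/vhost/vdpa.c code.

/*
 * Sketch: an interrupt callback that merely forwards the event to
 * userspace through an eventfd, so its vector gains nothing from
 * kernel-managed affinity.
 */
#include <linux/eventfd.h>
#include <linux/interrupt.h>

struct foo_vhost_vq {
	struct eventfd_ctx *call_ctx;	/* userspace's "call" eventfd */
};

static irqreturn_t foo_vhost_vq_cb(void *private)
{
	struct foo_vhost_vq *vq = private;

	if (vq->call_ctx)
		eventfd_signal(vq->call_ctx, 1);	/* just kick userspace */

	return IRQ_HANDLED;
}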
diff --git a/drivers/virtio/virtio_vdpa.c b/drivers/virtio/virtio_vdpa.c
index 08084b49e5a1..4731e4616ee0 100644
--- a/drivers/virtio/virtio_vdpa.c
+++ b/drivers/virtio/virtio_vdpa.c
@@ -275,9 +275,13 @@ static int virtio_vdpa_find_vqs(struct virtio_device *vdev, unsigned int nvqs,
 	struct virtio_vdpa_device *vd_dev = to_virtio_vdpa_device(vdev);
 	struct vdpa_device *vdpa = vd_get_vdpa(vdev);
 	const struct vdpa_config_ops *ops = vdpa->config;
+	struct irq_affinity default_affd = { 0 };
 	struct vdpa_callback cb;
 	int i, err, queue_idx = 0;
 
+	if (ops->set_irq_affinity)
+		ops->set_irq_affinity(vdpa, desc ? desc : &default_affd);
+
 	for (i = 0; i < nvqs; ++i) {
 		if (!names[i]) {
 			vqs[i] = NULL;
diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
index 0ff6c9363356..482ff7d0206f 100644
--- a/include/linux/vdpa.h
+++ b/include/linux/vdpa.h
@@ -256,6 +256,12 @@ struct vdpa_map_file {
  *				@vdev: vdpa device
  *				@idx: virtqueue index
  *				Returns the irq affinity mask
+ * @set_irq_affinity:		Pass the irq affinity hint from the virtio
+ *				device driver to vdpa driver (optional).
+ *				Needed by the interrupt affinity spreading
+ *				mechanism.
+ *				@vdev: vdpa device
+ *				@desc: irq affinity hint
  * @set_group_asid:		Set address space identifier for a
  *				virtqueue group (optional)
  *				@vdev: vdpa device
@@ -344,6 +350,8 @@ struct vdpa_config_ops {
 			       const struct cpumask *cpu_mask);
 	const struct cpumask *(*get_vq_affinity)(struct vdpa_device *vdev,
 						  u16 idx);
+	void (*set_irq_affinity)(struct vdpa_device *vdev,
+				 struct irq_affinity *desc);
 
 	/* DMA ops */
 	int (*set_map)(struct vdpa_device *vdev, unsigned int asid,
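
For reference, the hint being passed around is a struct irq_affinity from <linux/interrupt.h>. The example below uses invented values to show what its fields mean; the default hint used by this patch is simply all zeroes, which asks the IRQ core to spread every vector.

/*
 * Invented example values, not taken from this patch: exclude one leading
 * vector (e.g. a config interrupt) from managed affinity and let the rest
 * be spread across CPUs.
 */
#include <linux/interrupt.h>

static const struct irq_affinity foo_affd = {
	.pre_vectors	= 1,	/* first vector: no managed affinity */
	.post_vectors	= 0,	/* no reserved vectors at the tail */
	/* all remaining vectors are spread across CPUs by the IRQ core */
};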