Message ID: 1697605893-30313-1-git-send-email-si-wei.liu@oracle.com
State: New
From: Si-Wei Liu <si-wei.liu@oracle.com>
To: jasowang@redhat.com, mst@redhat.com, eperezma@redhat.com, sgarzare@redhat.com
Cc: virtualization@lists.linux-foundation.org, linux-kernel@vger.kernel.org
Subject: [RFC v2 PATCH] vdpa_sim: implement .reset_map support
Date: Tue, 17 Oct 2023 22:11:33 -0700
Series: [RFC,v2] vdpa_sim: implement .reset_map support
Commit Message
Si-Wei Liu
Oct. 18, 2023, 5:11 a.m. UTC
RFC only. Not tested on vdpa-sim-blk with user virtual address.
Works fine with vdpa-sim-net which uses physical address to map.
This patch is based on top of [1].
[1] https://lore.kernel.org/virtualization/1696928580-7520-1-git-send-email-si-wei.liu@oracle.com/
Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
---
RFC v2:
- initialize iotlb to passthrough mode in device add
---
drivers/vdpa/vdpa_sim/vdpa_sim.c | 34 ++++++++++++++++++++++++--------
1 file changed, 26 insertions(+), 8 deletions(-)
Comments
On Tue, Oct 17, 2023 at 10:11:33PM -0700, Si-Wei Liu wrote:
>RFC only. Not tested on vdpa-sim-blk with user virtual address.
>Works fine with vdpa-sim-net which uses physical address to map.
>
>This patch is based on top of [1].
>
>[1] https://lore.kernel.org/virtualization/1696928580-7520-1-git-send-email-si-wei.liu@oracle.com/
>
>Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
>
>---
>RFC v2:
> - initialize iotlb to passthrough mode in device add

I tested this version and I didn't see any issue ;-)

Tested-by: Stefano Garzarella <sgarzare@redhat.com>

>[quoted patch elided; see the full diff at the bottom of the page]
On 10/18/2023 1:05 AM, Stefano Garzarella wrote:
> On Tue, Oct 17, 2023 at 10:11:33PM -0700, Si-Wei Liu wrote:
>> RFC only. Not tested on vdpa-sim-blk with user virtual address.
>> Works fine with vdpa-sim-net which uses physical address to map.
>> [...]
>
> I tested this version and I didn't see any issue ;-)
Great, thank you so much for your help on testing my patch, Stefano!

Just for my own interest/curiosity, currently there's no vhost-vdpa backend client implemented for vdpa-sim-blk or any vdpa block device in userspace as yet, correct? So there was no test specific to vhost-vdpa that needs to be exercised, right?

Thanks,
-Siwei

> Tested-by: Stefano Garzarella <sgarzare@redhat.com>
>
> [quoted patch elided]
On Wed, Oct 18, 2023 at 04:47:48PM -0700, Si-Wei Liu wrote:
>Great, thank you so much for your help on testing my patch, Stefano!

You're welcome :-)

>Just for my own interest/curiosity, currently there's no vhost-vdpa
>backend client implemented for vdpa-sim-blk

Yep, we developed libblkio [1]. libblkio exposes a common API to access
block devices in userspace. It supports several drivers. The one useful
for this use case is `virtio-blk-vhost-vdpa`. Here [2] are some examples
on how to use the libblkio test suite with the vdpa-sim-blk.

Since QEMU 7.2, it supports libblkio drivers, so you can use the
following options to attach a vdpa-blk device to a VM:

  -blockdev node-name=drive_src1,driver=virtio-blk-vhost-vdpa,path=/dev/vhost-vdpa-0,cache.direct=on \
  -device virtio-blk-pci,id=src1,bootindex=2,drive=drive_src1 \

For now only what we called slow-path [3][4] is supported, since the VQs
are not directly exposed to the guest, but QEMU allocates other VQs
(similar to shadow VQs for net) to support live-migration and QEMU
storage features. Fast-path is on the agenda, but on pause for now.

>or any vdpa block device in userspace as yet, correct?

Do you mean with VDUSE? In this case, yes, qemu-storage-daemon supports
it, and can implement a virtio-blk in user space, exposing a disk image
through VDUSE. There is an example in libblkio as well [5] on how to
start it.

>So there was no test specific to vhost-vdpa that needs to be exercised,
>right?

I hope I answered above :-)

This reminded me that I need to write a blog post with all this
information, I hope to do that soon!

Stefano

[1] https://gitlab.com/libblkio/libblkio
[2] https://gitlab.com/libblkio/libblkio/-/blob/main/tests/meson.build?ref_type=heads#L42
[3] https://kvmforum2022.sched.com/event/15jK5/qemu-storage-daemon-and-libblkio-exploring-new-shores-for-the-qemu-block-layer-kevin-wolf-stefano-garzarella-red-hat
[4] https://kvmforum2021.sched.com/event/ke3a/vdpa-blk-unified-hardware-and-software-offload-for-virtio-blk-stefano-garzarella-red-hat
[5] https://gitlab.com/libblkio/libblkio/-/blob/main/tests/meson.build?ref_type=heads#L58
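The two QEMU options quoted above can be placed in context as a full invocation. Only the `-blockdev` and `-device` lines come from the thread; the binary name and the guest memory/boot-disk options are illustrative placeholders, not something the thread specifies:

```shell
# Sketch: attach a vdpa-blk device (e.g. vdpa-sim-blk bound to
# /dev/vhost-vdpa-0) to a VM via libblkio's virtio-blk-vhost-vdpa
# driver. Requires QEMU >= 7.2 built with libblkio support.
qemu-system-x86_64 \
  -m 2G -smp 2 \
  -drive file=guest.qcow2,if=virtio,bootindex=1 \
  -blockdev node-name=drive_src1,driver=virtio-blk-vhost-vdpa,path=/dev/vhost-vdpa-0,cache.direct=on \
  -device virtio-blk-pci,id=src1,bootindex=2,drive=drive_src1
```

Note that `cache.direct=on` is required here because libblkio drivers use the host device directly, bypassing the host page cache.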
On 10/19/2023 2:29 AM, Stefano Garzarella wrote:
> On Wed, Oct 18, 2023 at 04:47:48PM -0700, Si-Wei Liu wrote:
>> Just for my own interest/curiosity, currently there's no vhost-vdpa
>> backend client implemented for vdpa-sim-blk
>
> Yep, we developed libblkio [1]. libblkio exposes common API to access
> block devices in userspace. It supports several drivers.
> The one useful for this use case is `virtio-blk-vhost-vdpa`.
> [...]
>
>> So there was no test specific to vhost-vdpa that needs to be
>> exercised, right?
>
> I hope I answered above :-)
Definitely! This is exactly what I needed, it's really useful! Much appreciated for the detailed information! I hadn't been aware of the latest status on libblkio drivers and qemu support since I last checked it (it was at some point right after KVM 2022, sorry my knowledge too outdated). I followed your links below and checked a few things, looks my change shouldn't affect anything. Good to see all the desired pieces landed to QEMU and libblkio already as planned, great job done!

Cheers,
-Siwei

> This reminded me that I need to write a blog post with all this
> information, I hope to do that soon!
>
> Stefano
>
> [1] https://gitlab.com/libblkio/libblkio
> [2] https://gitlab.com/libblkio/libblkio/-/blob/main/tests/meson.build?ref_type=heads#L42
> [3] https://kvmforum2022.sched.com/event/15jK5/qemu-storage-daemon-and-libblkio-exploring-new-shores-for-the-qemu-block-layer-kevin-wolf-stefano-garzarella-red-hat
> [4] https://kvmforum2021.sched.com/event/ke3a/vdpa-blk-unified-hardware-and-software-offload-for-virtio-blk-stefano-garzarella-red-hat
> [5] https://gitlab.com/libblkio/libblkio/-/blob/main/tests/meson.build?ref_type=heads#L58
diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
index 76d41058add9..2a0a6042d61d 100644
--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
+++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
@@ -151,13 +151,6 @@ static void vdpasim_do_reset(struct vdpasim *vdpasim)
 				      &vdpasim->iommu_lock);
 	}
 
-	for (i = 0; i < vdpasim->dev_attr.nas; i++) {
-		vhost_iotlb_reset(&vdpasim->iommu[i]);
-		vhost_iotlb_add_range(&vdpasim->iommu[i], 0, ULONG_MAX,
-				      0, VHOST_MAP_RW);
-		vdpasim->iommu_pt[i] = true;
-	}
-
 	vdpasim->running = true;
 	spin_unlock(&vdpasim->iommu_lock);
 
@@ -259,8 +252,12 @@ struct vdpasim *vdpasim_create(struct vdpasim_dev_attr *dev_attr,
 	if (!vdpasim->iommu_pt)
 		goto err_iommu;
 
-	for (i = 0; i < vdpasim->dev_attr.nas; i++)
+	for (i = 0; i < vdpasim->dev_attr.nas; i++) {
 		vhost_iotlb_init(&vdpasim->iommu[i], max_iotlb_entries, 0);
+		vhost_iotlb_add_range(&vdpasim->iommu[i], 0, ULONG_MAX, 0,
+				      VHOST_MAP_RW);
+		vdpasim->iommu_pt[i] = true;
+	}
 
 	for (i = 0; i < dev_attr->nvqs; i++)
 		vringh_set_iotlb(&vdpasim->vqs[i].vring, &vdpasim->iommu[0],
@@ -637,6 +634,25 @@ static int vdpasim_set_map(struct vdpa_device *vdpa, unsigned int asid,
 	return ret;
 }
 
+static int vdpasim_reset_map(struct vdpa_device *vdpa, unsigned int asid)
+{
+	struct vdpasim *vdpasim = vdpa_to_sim(vdpa);
+
+	if (asid >= vdpasim->dev_attr.nas)
+		return -EINVAL;
+
+	spin_lock(&vdpasim->iommu_lock);
+	if (vdpasim->iommu_pt[asid])
+		goto out;
+	vhost_iotlb_reset(&vdpasim->iommu[asid]);
+	vhost_iotlb_add_range(&vdpasim->iommu[asid], 0, ULONG_MAX,
+			      0, VHOST_MAP_RW);
+	vdpasim->iommu_pt[asid] = true;
+out:
+	spin_unlock(&vdpasim->iommu_lock);
+	return 0;
+}
+
 static int vdpasim_bind_mm(struct vdpa_device *vdpa, struct mm_struct *mm)
 {
 	struct vdpasim *vdpasim = vdpa_to_sim(vdpa);
@@ -759,6 +775,7 @@ static const struct vdpa_config_ops vdpasim_config_ops = {
 	.set_group_asid         = vdpasim_set_group_asid,
 	.dma_map                = vdpasim_dma_map,
 	.dma_unmap              = vdpasim_dma_unmap,
+	.reset_map              = vdpasim_reset_map,
 	.bind_mm		= vdpasim_bind_mm,
 	.unbind_mm		= vdpasim_unbind_mm,
 	.free                   = vdpasim_free,
@@ -796,6 +813,7 @@ static const struct vdpa_config_ops vdpasim_batch_config_ops = {
 	.get_iova_range         = vdpasim_get_iova_range,
 	.set_group_asid         = vdpasim_set_group_asid,
 	.set_map                = vdpasim_set_map,
+	.reset_map              = vdpasim_reset_map,
 	.bind_mm		= vdpasim_bind_mm,
 	.unbind_mm		= vdpasim_unbind_mm,
 	.free                   = vdpasim_free,
--
2.39.3