Message ID: <20230608090124.1807-1-angus.chen@jaguarmicro.com>
State: New
From: Angus Chen <angus.chen@jaguarmicro.com>
To: mst@redhat.com, jasowang@redhat.com
Cc: virtualization@lists.linux-foundation.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2] vdpa/vp_vdpa: Check queue number of vdpa device from add_config
Date: Thu, 8 Jun 2023 17:01:24 +0800
Message-Id: <20230608090124.1807-1-angus.chen@jaguarmicro.com>
Series: [v2] vdpa/vp_vdpa: Check queue number of vdpa device from add_config
Commit Message
Angus Chen
June 8, 2023, 9:01 a.m. UTC
When adding a virtio_pci vdpa device, check the number of vqs in the
device cap against max_vq_pairs from add_config.
Start simply by failing if the provisioned number of queue pairs is not
equal to the one the hardware has.
Signed-off-by: Angus Chen <angus.chen@jaguarmicro.com>
---
v1: Use max_vqs from add_config
v2: Just fail if max_vqs from add_config is not the same as the device
cap. Suggested by Jason.
drivers/vdpa/virtio_pci/vp_vdpa.c | 35 ++++++++++++++++++-------------
1 file changed, 21 insertions(+), 14 deletions(-)
Comments
On Thu, Jun 08, 2023 at 05:01:24PM +0800, Angus Chen wrote:
> When add virtio_pci vdpa device,check the vqs number of device cap
> and max_vq_pairs from add_config.
> Simply starting from failing if the provisioned #qp is not
> equal to the one that hardware has.
>
> Signed-off-by: Angus Chen <angus.chen@jaguarmicro.com>

I am not sure about this one. How does userspace know
which values are legal?

If there's no way then maybe we should just cap the value
to what device can support but otherwise keep the device
working.

> ---
> v1: Use max_vqs from add_config
> v2: Just return fail if max_vqs from add_config is not same as device
> cap. Suggested by jason.
>
> drivers/vdpa/virtio_pci/vp_vdpa.c | 35 ++++++++++++++++++-------------
> 1 file changed, 21 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/vdpa/virtio_pci/vp_vdpa.c b/drivers/vdpa/virtio_pci/vp_vdpa.c
> index 281287fae89f..c1fb6963da12 100644
> --- a/drivers/vdpa/virtio_pci/vp_vdpa.c
> +++ b/drivers/vdpa/virtio_pci/vp_vdpa.c
> @@ -480,32 +480,39 @@ static int vp_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
>  	u64 device_features;
>  	int ret, i;
>
> -	vp_vdpa = vdpa_alloc_device(struct vp_vdpa, vdpa,
> -				    dev, &vp_vdpa_ops, 1, 1, name, false);
> -
> -	if (IS_ERR(vp_vdpa)) {
> -		dev_err(dev, "vp_vdpa: Failed to allocate vDPA structure\n");
> -		return PTR_ERR(vp_vdpa);
> +	if (add_config->mask & BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MAX_VQP)) {
> +		if (add_config->net.max_vq_pairs != (v_mdev->max_supported_vqs / 2)) {
> +			dev_err(&pdev->dev, "max vqs 0x%x should be equal to 0x%x which device has\n",
> +				add_config->net.max_vq_pairs*2, v_mdev->max_supported_vqs);
> +			return -EINVAL;
> +		}
>  	}
>
> -	vp_vdpa_mgtdev->vp_vdpa = vp_vdpa;
> -
> -	vp_vdpa->vdpa.dma_dev = &pdev->dev;
> -	vp_vdpa->queues = vp_modern_get_num_queues(mdev);
> -	vp_vdpa->mdev = mdev;
> -
>  	device_features = vp_modern_get_features(mdev);
>  	if (add_config->mask & BIT_ULL(VDPA_ATTR_DEV_FEATURES)) {
>  		if (add_config->device_features & ~device_features) {
> -			ret = -EINVAL;
>  			dev_err(&pdev->dev, "Try to provision features "
>  				"that are not supported by the device: "
>  				"device_features 0x%llx provisioned 0x%llx\n",
>  				device_features, add_config->device_features);
> -			goto err;
> +			return -EINVAL;
>  		}
>  		device_features = add_config->device_features;
>  	}
> +
> +	vp_vdpa = vdpa_alloc_device(struct vp_vdpa, vdpa,
> +				    dev, &vp_vdpa_ops, 1, 1, name, false);
> +
> +	if (IS_ERR(vp_vdpa)) {
> +		dev_err(dev, "vp_vdpa: Failed to allocate vDPA structure\n");
> +		return PTR_ERR(vp_vdpa);
> +	}
> +
> +	vp_vdpa_mgtdev->vp_vdpa = vp_vdpa;
> +
> +	vp_vdpa->vdpa.dma_dev = &pdev->dev;
> +	vp_vdpa->queues = v_mdev->max_supported_vqs;
> +	vp_vdpa->mdev = mdev;
>  	vp_vdpa->device_features = device_features;
>
>  	ret = devm_add_action_or_reset(dev, vp_vdpa_free_irq_vectors, pdev);
> --
> 2.25.1
> -----Original Message-----
> From: Michael S. Tsirkin <mst@redhat.com>
> Sent: Friday, June 9, 2023 3:45 AM
> To: Angus Chen <angus.chen@jaguarmicro.com>
> Cc: jasowang@redhat.com; virtualization@lists.linux-foundation.org;
> linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v2] vdpa/vp_vdpa: Check queue number of vdpa device from
> add_config
>
> On Thu, Jun 08, 2023 at 05:01:24PM +0800, Angus Chen wrote:
> > When add virtio_pci vdpa device,check the vqs number of device cap
> > and max_vq_pairs from add_config.
> > Simply starting from failing if the provisioned #qp is not
> > equal to the one that hardware has.
> >
> > Signed-off-by: Angus Chen <angus.chen@jaguarmicro.com>
>
> I am not sure about this one. How does userspace know
> which values are legal?

Maybe we can print the device cap in dev_err?

>
> If there's no way then maybe we should just cap the value
> to what device can support but otherwise keep the device
> working.

When I used max_vq_pairs to test vp_vdpa, it didn't work as expected,
and there was no hint as to why.

> > ---
> > v1: Use max_vqs from add_config
> > v2: Just return fail if max_vqs from add_config is not same as device
> > cap. Suggested by jason.
> > > > drivers/vdpa/virtio_pci/vp_vdpa.c | 35 ++++++++++++++++++------------- > > 1 file changed, 21 insertions(+), 14 deletions(-) > > > > diff --git a/drivers/vdpa/virtio_pci/vp_vdpa.c > b/drivers/vdpa/virtio_pci/vp_vdpa.c > > index 281287fae89f..c1fb6963da12 100644 > > --- a/drivers/vdpa/virtio_pci/vp_vdpa.c > > +++ b/drivers/vdpa/virtio_pci/vp_vdpa.c > > @@ -480,32 +480,39 @@ static int vp_vdpa_dev_add(struct > vdpa_mgmt_dev *v_mdev, const char *name, > > u64 device_features; > > int ret, i; > > > > - vp_vdpa = vdpa_alloc_device(struct vp_vdpa, vdpa, > > - dev, &vp_vdpa_ops, 1, 1, name, false); > > - > > - if (IS_ERR(vp_vdpa)) { > > - dev_err(dev, "vp_vdpa: Failed to allocate vDPA structure\n"); > > - return PTR_ERR(vp_vdpa); > > + if (add_config->mask & BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MAX_VQP)) { > > + if (add_config->net.max_vq_pairs != (v_mdev->max_supported_vqs / > 2)) { > > + dev_err(&pdev->dev, "max vqs 0x%x should be equal to 0x%x > which device has\n", > > + add_config->net.max_vq_pairs*2, > v_mdev->max_supported_vqs); > > + return -EINVAL; > > + } > > } > > > > - vp_vdpa_mgtdev->vp_vdpa = vp_vdpa; > > - > > - vp_vdpa->vdpa.dma_dev = &pdev->dev; > > - vp_vdpa->queues = vp_modern_get_num_queues(mdev); > > - vp_vdpa->mdev = mdev; > > - > > device_features = vp_modern_get_features(mdev); > > if (add_config->mask & BIT_ULL(VDPA_ATTR_DEV_FEATURES)) { > > if (add_config->device_features & ~device_features) { > > - ret = -EINVAL; > > dev_err(&pdev->dev, "Try to provision features " > > "that are not supported by the device: " > > "device_features 0x%llx provisioned 0x%llx\n", > > device_features, add_config->device_features); > > - goto err; > > + return -EINVAL; > > } > > device_features = add_config->device_features; > > } > > + > > + vp_vdpa = vdpa_alloc_device(struct vp_vdpa, vdpa, > > + dev, &vp_vdpa_ops, 1, 1, name, false); > > + > > + if (IS_ERR(vp_vdpa)) { > > + dev_err(dev, "vp_vdpa: Failed to allocate vDPA structure\n"); > > + return PTR_ERR(vp_vdpa); 
> > + } > > + > > + vp_vdpa_mgtdev->vp_vdpa = vp_vdpa; > > + > > + vp_vdpa->vdpa.dma_dev = &pdev->dev; > > + vp_vdpa->queues = v_mdev->max_supported_vqs; > > + vp_vdpa->mdev = mdev; > > vp_vdpa->device_features = device_features; > > > > ret = devm_add_action_or_reset(dev, vp_vdpa_free_irq_vectors, pdev); > > -- > > 2.25.1
On Fri, Jun 9, 2023 at 3:45 AM Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Thu, Jun 08, 2023 at 05:01:24PM +0800, Angus Chen wrote:
> > When add virtio_pci vdpa device,check the vqs number of device cap
> > and max_vq_pairs from add_config.
> > Simply starting from failing if the provisioned #qp is not
> > equal to the one that hardware has.

I think I kind of agree with Michael; I don't see any obvious advantage
in allowing userspace to configure max_vqp if it can't be provisioned
dynamically. What's wrong with just sticking to the current approach
that doesn't accept max_vqp?

A better approach is to tweak the vdpa tool to display the legal
attributes that can be provisioned.

> >
> > Signed-off-by: Angus Chen <angus.chen@jaguarmicro.com>
>
> I am not sure about this one. How does userspace know
> which values are legal?

"vdpa mgmtdev show" can give hints like:

max_supported_vqs 3

>
> If there's no way then maybe we should just cap the value
> to what device can support but otherwise keep the device
> working.

This seems to conflict with what other drivers (like mlx5) do:

	if (add_config->mask & BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MAX_VQP)) {
		if (add_config->net.max_vq_pairs > max_vqs / 2)
			return -EINVAL;
		max_vqs = min_t(u32, max_vqs, 2 * add_config->net.max_vq_pairs);
	} else {
		max_vqs = 2;
	}

Thanks

> >
> > ---
> > v1: Use max_vqs from add_config
> > v2: Just return fail if max_vqs from add_config is not same as device
> > cap. Suggested by jason.
> > > > drivers/vdpa/virtio_pci/vp_vdpa.c | 35 ++++++++++++++++++------------- > > 1 file changed, 21 insertions(+), 14 deletions(-) > > > > diff --git a/drivers/vdpa/virtio_pci/vp_vdpa.c b/drivers/vdpa/virtio_pci/vp_vdpa.c > > index 281287fae89f..c1fb6963da12 100644 > > --- a/drivers/vdpa/virtio_pci/vp_vdpa.c > > +++ b/drivers/vdpa/virtio_pci/vp_vdpa.c > > @@ -480,32 +480,39 @@ static int vp_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name, > > u64 device_features; > > int ret, i; > > > > - vp_vdpa = vdpa_alloc_device(struct vp_vdpa, vdpa, > > - dev, &vp_vdpa_ops, 1, 1, name, false); > > - > > - if (IS_ERR(vp_vdpa)) { > > - dev_err(dev, "vp_vdpa: Failed to allocate vDPA structure\n"); > > - return PTR_ERR(vp_vdpa); > > + if (add_config->mask & BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MAX_VQP)) { > > + if (add_config->net.max_vq_pairs != (v_mdev->max_supported_vqs / 2)) { > > + dev_err(&pdev->dev, "max vqs 0x%x should be equal to 0x%x which device has\n", > > + add_config->net.max_vq_pairs*2, v_mdev->max_supported_vqs); > > + return -EINVAL; > > + } > > } > > > > - vp_vdpa_mgtdev->vp_vdpa = vp_vdpa; > > - > > - vp_vdpa->vdpa.dma_dev = &pdev->dev; > > - vp_vdpa->queues = vp_modern_get_num_queues(mdev); > > - vp_vdpa->mdev = mdev; > > - > > device_features = vp_modern_get_features(mdev); > > if (add_config->mask & BIT_ULL(VDPA_ATTR_DEV_FEATURES)) { > > if (add_config->device_features & ~device_features) { > > - ret = -EINVAL; > > dev_err(&pdev->dev, "Try to provision features " > > "that are not supported by the device: " > > "device_features 0x%llx provisioned 0x%llx\n", > > device_features, add_config->device_features); > > - goto err; > > + return -EINVAL; > > } > > device_features = add_config->device_features; > > } > > + > > + vp_vdpa = vdpa_alloc_device(struct vp_vdpa, vdpa, > > + dev, &vp_vdpa_ops, 1, 1, name, false); > > + > > + if (IS_ERR(vp_vdpa)) { > > + dev_err(dev, "vp_vdpa: Failed to allocate vDPA structure\n"); > > + return PTR_ERR(vp_vdpa); > > + } > 
> + > > + vp_vdpa_mgtdev->vp_vdpa = vp_vdpa; > > + > > + vp_vdpa->vdpa.dma_dev = &pdev->dev; > > + vp_vdpa->queues = v_mdev->max_supported_vqs; > > + vp_vdpa->mdev = mdev; > > vp_vdpa->device_features = device_features; > > > > ret = devm_add_action_or_reset(dev, vp_vdpa_free_irq_vectors, pdev); > > -- > > 2.25.1 >
On Fri, Jun 09, 2023 at 12:42:22AM +0000, Angus Chen wrote: > > > > -----Original Message----- > > From: Michael S. Tsirkin <mst@redhat.com> > > Sent: Friday, June 9, 2023 3:45 AM > > To: Angus Chen <angus.chen@jaguarmicro.com> > > Cc: jasowang@redhat.com; virtualization@lists.linux-foundation.org; > > linux-kernel@vger.kernel.org > > Subject: Re: [PATCH v2] vdpa/vp_vdpa: Check queue number of vdpa device from > > add_config > > > > On Thu, Jun 08, 2023 at 05:01:24PM +0800, Angus Chen wrote: > > > When add virtio_pci vdpa device,check the vqs number of device cap > > > and max_vq_pairs from add_config. > > > Simply starting from failing if the provisioned #qp is not > > > equal to the one that hardware has. > > > > > > Signed-off-by: Angus Chen <angus.chen@jaguarmicro.com> > > > > I am not sure about this one. How does userspace know > > which values are legal? > Maybe we can print device cap of device in dev_err? No one reads these except kernel devs. Surely not userspace. > > > > If there's no way then maybe we should just cap the value > > to what device can support but otherwise keep the device > > working. > We I use max_vqs pair to test vp_vdpa,it doesn't work as expect. > And there is no any hint of this. So things don't work either way just differently. Let's come up with a way for userspace to know what's legal so things can start working. > > > > > --- > > > v1: Use max_vqs from add_config > > > v2: Just return fail if max_vqs from add_config is not same as device > > > cap. Suggested by jason. 
> > > > > > drivers/vdpa/virtio_pci/vp_vdpa.c | 35 ++++++++++++++++++------------- > > > 1 file changed, 21 insertions(+), 14 deletions(-) > > > > > > diff --git a/drivers/vdpa/virtio_pci/vp_vdpa.c > > b/drivers/vdpa/virtio_pci/vp_vdpa.c > > > index 281287fae89f..c1fb6963da12 100644 > > > --- a/drivers/vdpa/virtio_pci/vp_vdpa.c > > > +++ b/drivers/vdpa/virtio_pci/vp_vdpa.c > > > @@ -480,32 +480,39 @@ static int vp_vdpa_dev_add(struct > > vdpa_mgmt_dev *v_mdev, const char *name, > > > u64 device_features; > > > int ret, i; > > > > > > - vp_vdpa = vdpa_alloc_device(struct vp_vdpa, vdpa, > > > - dev, &vp_vdpa_ops, 1, 1, name, false); > > > - > > > - if (IS_ERR(vp_vdpa)) { > > > - dev_err(dev, "vp_vdpa: Failed to allocate vDPA structure\n"); > > > - return PTR_ERR(vp_vdpa); > > > + if (add_config->mask & BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MAX_VQP)) { > > > + if (add_config->net.max_vq_pairs != (v_mdev->max_supported_vqs / > > 2)) { > > > + dev_err(&pdev->dev, "max vqs 0x%x should be equal to 0x%x > > which device has\n", > > > + add_config->net.max_vq_pairs*2, > > v_mdev->max_supported_vqs); > > > + return -EINVAL; > > > + } > > > } > > > > > > - vp_vdpa_mgtdev->vp_vdpa = vp_vdpa; > > > - > > > - vp_vdpa->vdpa.dma_dev = &pdev->dev; > > > - vp_vdpa->queues = vp_modern_get_num_queues(mdev); > > > - vp_vdpa->mdev = mdev; > > > - > > > device_features = vp_modern_get_features(mdev); > > > if (add_config->mask & BIT_ULL(VDPA_ATTR_DEV_FEATURES)) { > > > if (add_config->device_features & ~device_features) { > > > - ret = -EINVAL; > > > dev_err(&pdev->dev, "Try to provision features " > > > "that are not supported by the device: " > > > "device_features 0x%llx provisioned 0x%llx\n", > > > device_features, add_config->device_features); > > > - goto err; > > > + return -EINVAL; > > > } > > > device_features = add_config->device_features; > > > } > > > + > > > + vp_vdpa = vdpa_alloc_device(struct vp_vdpa, vdpa, > > > + dev, &vp_vdpa_ops, 1, 1, name, false); > > > + > > > + if 
(IS_ERR(vp_vdpa)) { > > > + dev_err(dev, "vp_vdpa: Failed to allocate vDPA structure\n"); > > > + return PTR_ERR(vp_vdpa); > > > + } > > > + > > > + vp_vdpa_mgtdev->vp_vdpa = vp_vdpa; > > > + > > > + vp_vdpa->vdpa.dma_dev = &pdev->dev; > > > + vp_vdpa->queues = v_mdev->max_supported_vqs; > > > + vp_vdpa->mdev = mdev; > > > vp_vdpa->device_features = device_features; > > > > > > ret = devm_add_action_or_reset(dev, vp_vdpa_free_irq_vectors, pdev); > > > -- > > > 2.25.1 >
On Thu, Jun 8, 2023 at 5:02 PM Angus Chen <angus.chen@jaguarmicro.com> wrote: > > When add virtio_pci vdpa device,check the vqs number of device cap > and max_vq_pairs from add_config. > Simply starting from failing if the provisioned #qp is not > equal to the one that hardware has. > > Signed-off-by: Angus Chen <angus.chen@jaguarmicro.com> > --- > v1: Use max_vqs from add_config > v2: Just return fail if max_vqs from add_config is not same as device > cap. Suggested by jason. > > drivers/vdpa/virtio_pci/vp_vdpa.c | 35 ++++++++++++++++++------------- > 1 file changed, 21 insertions(+), 14 deletions(-) > > diff --git a/drivers/vdpa/virtio_pci/vp_vdpa.c b/drivers/vdpa/virtio_pci/vp_vdpa.c > index 281287fae89f..c1fb6963da12 100644 > --- a/drivers/vdpa/virtio_pci/vp_vdpa.c > +++ b/drivers/vdpa/virtio_pci/vp_vdpa.c > @@ -480,32 +480,39 @@ static int vp_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name, > u64 device_features; > int ret, i; > > - vp_vdpa = vdpa_alloc_device(struct vp_vdpa, vdpa, > - dev, &vp_vdpa_ops, 1, 1, name, false); > - > - if (IS_ERR(vp_vdpa)) { > - dev_err(dev, "vp_vdpa: Failed to allocate vDPA structure\n"); > - return PTR_ERR(vp_vdpa); > + if (add_config->mask & BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MAX_VQP)) { > + if (add_config->net.max_vq_pairs != (v_mdev->max_supported_vqs / 2)) { > + dev_err(&pdev->dev, "max vqs 0x%x should be equal to 0x%x which device has\n", > + add_config->net.max_vq_pairs*2, v_mdev->max_supported_vqs); > + return -EINVAL; > + } > } > > - vp_vdpa_mgtdev->vp_vdpa = vp_vdpa; > - > - vp_vdpa->vdpa.dma_dev = &pdev->dev; > - vp_vdpa->queues = vp_modern_get_num_queues(mdev); > - vp_vdpa->mdev = mdev; > - > device_features = vp_modern_get_features(mdev); > if (add_config->mask & BIT_ULL(VDPA_ATTR_DEV_FEATURES)) { > if (add_config->device_features & ~device_features) { > - ret = -EINVAL; > dev_err(&pdev->dev, "Try to provision features " > "that are not supported by the device: " > "device_features 0x%llx provisioned 
0x%llx\n", > device_features, add_config->device_features); > - goto err; > + return -EINVAL; > } > device_features = add_config->device_features; > } > + > + vp_vdpa = vdpa_alloc_device(struct vp_vdpa, vdpa, > + dev, &vp_vdpa_ops, 1, 1, name, false); > + > + if (IS_ERR(vp_vdpa)) { > + dev_err(dev, "vp_vdpa: Failed to allocate vDPA structure\n"); > + return PTR_ERR(vp_vdpa); > + } > + > + vp_vdpa_mgtdev->vp_vdpa = vp_vdpa; > + > + vp_vdpa->vdpa.dma_dev = &pdev->dev; > + vp_vdpa->queues = v_mdev->max_supported_vqs; Why bother with those changes? mgtdev->max_supported_vqs = vp_modern_get_num_queues(mdev); Thanks > + vp_vdpa->mdev = mdev; > vp_vdpa->device_features = device_features; > > ret = devm_add_action_or_reset(dev, vp_vdpa_free_irq_vectors, pdev); > -- > 2.25.1 >
Hi Jason,

> -----Original Message-----
> From: Jason Wang <jasowang@redhat.com>
> Sent: Monday, June 26, 2023 10:30 AM
> To: Angus Chen <angus.chen@jaguarmicro.com>
> Cc: mst@redhat.com; virtualization@lists.linux-foundation.org;
> linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v2] vdpa/vp_vdpa: Check queue number of vdpa device from
> add_config
>
> On Thu, Jun 8, 2023 at 5:02 PM Angus Chen <angus.chen@jaguarmicro.com> wrote:
> >
> > When add virtio_pci vdpa device,check the vqs number of device cap
> > and max_vq_pairs from add_config.
> > Simply starting from failing if the provisioned #qp is not
> > equal to the one that hardware has.
> >
> > Signed-off-by: Angus Chen <angus.chen@jaguarmicro.com>
> > ---
> > v1: Use max_vqs from add_config
> > v2: Just return fail if max_vqs from add_config is not same as device
> > cap. Suggested by jason.
> >
> > drivers/vdpa/virtio_pci/vp_vdpa.c | 35 ++++++++++++++++++-------------
> > 1 file changed, 21 insertions(+), 14 deletions(-)
> >
> > diff --git a/drivers/vdpa/virtio_pci/vp_vdpa.c b/drivers/vdpa/virtio_pci/vp_vdpa.c
> > index 281287fae89f..c1fb6963da12 100644
> > --- a/drivers/vdpa/virtio_pci/vp_vdpa.c
> > +++ b/drivers/vdpa/virtio_pci/vp_vdpa.c
> > @@ -480,32 +480,39 @@ static int vp_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
> >  	u64 device_features;
> >  	int ret, i;
> >
> > -	vp_vdpa = vdpa_alloc_device(struct vp_vdpa, vdpa,
> > -				    dev, &vp_vdpa_ops, 1, 1, name, false);
> > -
> > -	if (IS_ERR(vp_vdpa)) {
> > -		dev_err(dev, "vp_vdpa: Failed to allocate vDPA structure\n");
> > -		return PTR_ERR(vp_vdpa);
> > +	if (add_config->mask & BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MAX_VQP)) {
> > +		if (add_config->net.max_vq_pairs != (v_mdev->max_supported_vqs / 2)) {
> > +			dev_err(&pdev->dev, "max vqs 0x%x should be equal to 0x%x which device has\n",
> > +				add_config->net.max_vq_pairs*2, v_mdev->max_supported_vqs);
> > +			return -EINVAL;
> > +		}
> >  	}
> >
> > -	vp_vdpa_mgtdev->vp_vdpa = vp_vdpa;
> > -
> > -	vp_vdpa->vdpa.dma_dev = &pdev->dev;
> > -	vp_vdpa->queues = vp_modern_get_num_queues(mdev);
> > -	vp_vdpa->mdev = mdev;
> > -
> >  	device_features = vp_modern_get_features(mdev);
> >  	if (add_config->mask & BIT_ULL(VDPA_ATTR_DEV_FEATURES)) {
> >  		if (add_config->device_features & ~device_features) {
> > -			ret = -EINVAL;
> >  			dev_err(&pdev->dev, "Try to provision features "
> >  				"that are not supported by the device: "
> >  				"device_features 0x%llx provisioned 0x%llx\n",
> >  				device_features, add_config->device_features);
> > -			goto err;
> > +			return -EINVAL;
> >  		}
> >  		device_features = add_config->device_features;
> >  	}
> > +
> > +	vp_vdpa = vdpa_alloc_device(struct vp_vdpa, vdpa,
> > +				    dev, &vp_vdpa_ops, 1, 1, name, false);
> > +
> > +	if (IS_ERR(vp_vdpa)) {
> > +		dev_err(dev, "vp_vdpa: Failed to allocate vDPA structure\n");
> > +		return PTR_ERR(vp_vdpa);
> > +	}
> > +
> > +	vp_vdpa_mgtdev->vp_vdpa = vp_vdpa;
> > +
> > +	vp_vdpa->vdpa.dma_dev = &pdev->dev;
> > +	vp_vdpa->queues = v_mdev->max_supported_vqs;
>
> Why bother with those changes?
>
> mgtdev->max_supported_vqs = vp_modern_get_num_queues(mdev);

max_supported_vqs will not change, so we can read it from
mgtdev->max_supported_vqs. Calling vp_modern_get_num_queues(mdev)
instead issues TLPs to communicate with the device; using the cached
value just avoids some of that traffic.

> Thanks
>
> > +	vp_vdpa->mdev = mdev;
> >  	vp_vdpa->device_features = device_features;
> >
> >  	ret = devm_add_action_or_reset(dev, vp_vdpa_free_irq_vectors, pdev);
> > --
> > 2.25.1
> >
On Mon, Jun 26, 2023 at 10:42 AM Angus Chen <angus.chen@jaguarmicro.com> wrote:
>
> Hi, Jason.
> > -----Original Message-----
> > From: Jason Wang <jasowang@redhat.com>
> > Sent: Monday, June 26, 2023 10:30 AM
> > Subject: Re: [PATCH v2] vdpa/vp_vdpa: Check queue number of vdpa device from add_config
> >
> > On Thu, Jun 8, 2023 at 5:02 PM Angus Chen <angus.chen@jaguarmicro.com> wrote:
> > >
> > > [...]
> > > +
> > > +       vp_vdpa->vdpa.dma_dev = &pdev->dev;
> > > +       vp_vdpa->queues = v_mdev->max_supported_vqs;
> >
> > Why bother with those changes?
> >
> > mgtdev->max_supported_vqs = vp_modern_get_num_queues(mdev);
>
> max_supported_vqs will not be changed, so we can get max_supported_vqs
> from mgtdev->max_supported_vqs.
> If we use vp_modern_get_num_queues(mdev), it will use TLPs to communicate
> with the device.
> It just reduces some TLPs.

Ok, but

1) I think we don't care about the performance here
2) If we did, let's use a separate patch to do that as an optimization

Thanks
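The caching idea being debated above can be sketched with a hypothetical mock: read the queue count from device config space once at management-device setup, then let every later dev_add reuse the cached value instead of issuing another config-space read (more PCIe TLPs). All names below are invented for the sketch; `read_num_queues_from_device` stands in for the real vp_modern_get_num_queues():

```c
#include <stdint.h>

/* Counts simulated config-space reads, so the saving is observable. */
static unsigned int config_reads;

/* Mock of the expensive path: each call models a read of device
 * config space over PCIe. */
static uint16_t read_num_queues_from_device(void)
{
	config_reads++;
	return 8; /* pretend the device reports 8 virtqueues */
}

struct mgmt_dev {
	uint16_t max_supported_vqs; /* cached once at setup */
};

/* Setup path: the single place that touches the device. */
static void mgmt_dev_setup(struct mgmt_dev *m)
{
	m->max_supported_vqs = read_num_queues_from_device();
}

/* dev_add path: reuses the cached value, no device access. */
static uint16_t dev_add_num_queues(const struct mgmt_dev *m)
{
	return m->max_supported_vqs;
}
```

However many devices are added, `config_reads` stays at 1, which is the whole of the optimization; Jason's point is that this micro-saving belongs in a separate patch, if it matters at all.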
> -----Original Message-----
> From: Jason Wang <jasowang@redhat.com>
> Sent: Monday, June 26, 2023 10:51 AM
> To: Angus Chen <angus.chen@jaguarmicro.com>
> Cc: mst@redhat.com; virtualization@lists.linux-foundation.org;
> linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v2] vdpa/vp_vdpa: Check queue number of vdpa device from add_config
>
> On Mon, Jun 26, 2023 at 10:42 AM Angus Chen <angus.chen@jaguarmicro.com> wrote:
> >
> > [...]
> >
> > max_supported_vqs will not be changed, so we can get max_supported_vqs
> > from mgtdev->max_supported_vqs.
> > If we use vp_modern_get_num_queues(mdev), it will use TLPs to communicate
> > with the device.
> > It just reduces some TLPs.
>
> Ok, but
>
> 1) I think we don't care about the performance here
> 2) If we did, let's use a separate patch to do that as an optimization

Thank you. As mst did not support this patch some days ago, it will be dropped.
I plan to push an independent driver for our product rather than reuse vp_vdpa.
By the way, if I want to add SR-IOV support in our vdpa PCI driver, would it be accepted or not?

> Thanks
On Mon, Jun 26, 2023 at 11:02 AM Angus Chen <angus.chen@jaguarmicro.com> wrote:
>
> > -----Original Message-----
> > From: Jason Wang <jasowang@redhat.com>
> > Sent: Monday, June 26, 2023 10:51 AM
> > Subject: Re: [PATCH v2] vdpa/vp_vdpa: Check queue number of vdpa device from add_config
> >
> > [...]
> >
> > Ok, but
> >
> > 1) I think we don't care about the performance here
> > 2) If we did, let's use a separate patch to do that as an optimization
>
> Thank you. As mst did not support this patch some days ago, it will be dropped.
> I plan to push an independent driver for our product rather than reuse vp_vdpa.

That would be fine. But please try your best to reuse the modern virtio-pci library.

> By the way, if I want to add SR-IOV support in our vdpa PCI driver, would it be accepted or not?

I think the answer is yes.

Thanks
diff --git a/drivers/vdpa/virtio_pci/vp_vdpa.c b/drivers/vdpa/virtio_pci/vp_vdpa.c
index 281287fae89f..c1fb6963da12 100644
--- a/drivers/vdpa/virtio_pci/vp_vdpa.c
+++ b/drivers/vdpa/virtio_pci/vp_vdpa.c
@@ -480,32 +480,39 @@ static int vp_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
 	u64 device_features;
 	int ret, i;
 
-	vp_vdpa = vdpa_alloc_device(struct vp_vdpa, vdpa,
-				    dev, &vp_vdpa_ops, 1, 1, name, false);
-
-	if (IS_ERR(vp_vdpa)) {
-		dev_err(dev, "vp_vdpa: Failed to allocate vDPA structure\n");
-		return PTR_ERR(vp_vdpa);
+	if (add_config->mask & BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MAX_VQP)) {
+		if (add_config->net.max_vq_pairs != (v_mdev->max_supported_vqs / 2)) {
+			dev_err(&pdev->dev, "max vqs 0x%x should be equal to 0x%x which device has\n",
+				add_config->net.max_vq_pairs*2, v_mdev->max_supported_vqs);
+			return -EINVAL;
+		}
 	}
 
-	vp_vdpa_mgtdev->vp_vdpa = vp_vdpa;
-
-	vp_vdpa->vdpa.dma_dev = &pdev->dev;
-	vp_vdpa->queues = vp_modern_get_num_queues(mdev);
-	vp_vdpa->mdev = mdev;
-
 	device_features = vp_modern_get_features(mdev);
 	if (add_config->mask & BIT_ULL(VDPA_ATTR_DEV_FEATURES)) {
 		if (add_config->device_features & ~device_features) {
-			ret = -EINVAL;
 			dev_err(&pdev->dev, "Try to provision features "
 				"that are not supported by the device: "
 				"device_features 0x%llx provisioned 0x%llx\n",
 				device_features, add_config->device_features);
-			goto err;
+			return -EINVAL;
 		}
 		device_features = add_config->device_features;
 	}
+
+	vp_vdpa = vdpa_alloc_device(struct vp_vdpa, vdpa,
+				    dev, &vp_vdpa_ops, 1, 1, name, false);
+
+	if (IS_ERR(vp_vdpa)) {
+		dev_err(dev, "vp_vdpa: Failed to allocate vDPA structure\n");
+		return PTR_ERR(vp_vdpa);
+	}
+
+	vp_vdpa_mgtdev->vp_vdpa = vp_vdpa;
+
+	vp_vdpa->vdpa.dma_dev = &pdev->dev;
+	vp_vdpa->queues = v_mdev->max_supported_vqs;
 	vp_vdpa->mdev = mdev;
 	vp_vdpa->device_features = device_features;
 
 	ret = devm_add_action_or_reset(dev, vp_vdpa_free_irq_vectors, pdev);