From patchwork Thu Dec 14 01:51:42 2023
X-Patchwork-Submitter: longli@linuxonhyperv.com
X-Patchwork-Id: 178403
From: longli@linuxonhyperv.com
To: Jason Gunthorpe, Leon Romanovsky, Ajay Sharma, Dexuan Cui,
    "K. Y. Srinivasan", Haiyang Zhang, Wei Liu, "David S. Miller",
    Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: linux-rdma@vger.kernel.org, linux-hyperv@vger.kernel.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Long Li
Subject: [Patch v3 1/3] RDMA/mana_ib: register RDMA device with GDMA
Date: Wed, 13 Dec 2023 17:51:42 -0800
Message-Id: <1702518704-15886-2-git-send-email-longli@linuxonhyperv.com>
In-Reply-To: <1702518704-15886-1-git-send-email-longli@linuxonhyperv.com>
References: <1702518704-15886-1-git-send-email-longli@linuxonhyperv.com>

From: Long Li

The software client needs to register with the RDMA management interface
on the SoC to access more features, including querying device
capabilities and RC queue pairs.

Signed-off-by: Long Li
---
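A condensed sketch of the probe ordering this patch establishes
(hypothetical helper name; the bodies follow the diff below, with error
paths unwinding in reverse order of setup, the usual goto idiom):

        /* Sketch only: mirrors the new mana_ib_probe() flow below. */
        static int mana_ib_probe_sketch(struct mana_ib_dev *dev,
                                        struct gdma_dev *mdev)
        {
                int ret;

                /* 1. Claim the RDMA device slot on the GDMA bus first. */
                ret = mana_gd_register_device(&mdev->gdma_context->mana_ib);
                if (ret)
                        goto free_ib_device;
                dev->gdma_dev = &mdev->gdma_context->mana_ib;

                /* 2. Only then expose the device to the RDMA core. */
                ret = ib_register_device(&dev->ib_dev, "mana_%d",
                                         mdev->gdma_context->dev);
                if (ret)
                        goto deregister_device;
                return 0;

        deregister_device:
                mana_gd_deregister_device(dev->gdma_dev);  /* undo step 1 */
        free_ib_device:
                ib_dealloc_device(&dev->ib_dev);
                return ret;
        }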
 drivers/infiniband/hw/mana/device.c           | 24 +++++++++++++++----
 drivers/infiniband/hw/mana/main.c             |  4 ++--
 drivers/infiniband/hw/mana/qp.c               | 15 ++++++------
 .../net/ethernet/microsoft/mana/gdma_main.c   |  5 ++++
 include/net/mana/gdma.h                       |  4 ++++
 5 files changed, 38 insertions(+), 14 deletions(-)

diff --git a/drivers/infiniband/hw/mana/device.c b/drivers/infiniband/hw/mana/device.c
index d4541b8707e4..fe025e13a45c 100644
--- a/drivers/infiniband/hw/mana/device.c
+++ b/drivers/infiniband/hw/mana/device.c
@@ -68,7 +68,6 @@ static int mana_ib_probe(struct auxiliary_device *adev,
 	ibdev_dbg(&dev->ib_dev, "mdev=%p id=%d num_ports=%d\n", mdev,
 		  mdev->dev_id.as_uint32, dev->ib_dev.phys_port_cnt);
 
-	dev->gdma_dev = mdev;
 	dev->ib_dev.node_type = RDMA_NODE_IB_CA;
 
 	/*
@@ -78,16 +77,28 @@ static int mana_ib_probe(struct auxiliary_device *adev,
 	dev->ib_dev.num_comp_vectors = 1;
 	dev->ib_dev.dev.parent = mdev->gdma_context->dev;
 
-	ret = ib_register_device(&dev->ib_dev, "mana_%d",
-				 mdev->gdma_context->dev);
+	ret = mana_gd_register_device(&mdev->gdma_context->mana_ib);
 	if (ret) {
-		ib_dealloc_device(&dev->ib_dev);
-		return ret;
+		ibdev_err(&dev->ib_dev, "Failed to register device, ret %d",
+			  ret);
+		goto free_ib_device;
 	}
+	dev->gdma_dev = &mdev->gdma_context->mana_ib;
+
+	ret = ib_register_device(&dev->ib_dev, "mana_%d",
+				 mdev->gdma_context->dev);
+	if (ret)
+		goto deregister_device;
 
 	dev_set_drvdata(&adev->dev, dev);
 
 	return 0;
+
+deregister_device:
+	mana_gd_deregister_device(dev->gdma_dev);
+free_ib_device:
+	ib_dealloc_device(&dev->ib_dev);
+	return ret;
 }
 
 static void mana_ib_remove(struct auxiliary_device *adev)
@@ -95,6 +106,9 @@ static void mana_ib_remove(struct auxiliary_device *adev)
 	struct mana_ib_dev *dev = dev_get_drvdata(&adev->dev);
 
 	ib_unregister_device(&dev->ib_dev);
+
+	mana_gd_deregister_device(dev->gdma_dev);
+
 	ib_dealloc_device(&dev->ib_dev);
 }
 
diff --git a/drivers/infiniband/hw/mana/main.c b/drivers/infiniband/hw/mana/main.c
index 7be4c3adb4e2..53730306ed9b 100644
--- a/drivers/infiniband/hw/mana/main.c
+++ b/drivers/infiniband/hw/mana/main.c
@@ -8,7 +8,7 @@
 void mana_ib_uncfg_vport(struct mana_ib_dev *dev, struct mana_ib_pd *pd,
 			 u32 port)
 {
-	struct gdma_dev *gd = dev->gdma_dev;
+	struct gdma_dev *gd = &dev->gdma_dev->gdma_context->mana;
 	struct mana_port_context *mpc;
 	struct net_device *ndev;
 	struct mana_context *mc;
@@ -31,7 +31,7 @@ void mana_ib_uncfg_vport(struct mana_ib_dev *dev, struct mana_ib_pd *pd,
 int mana_ib_cfg_vport(struct mana_ib_dev *dev, u32 port, struct mana_ib_pd *pd,
 		      u32 doorbell_id)
 {
-	struct gdma_dev *mdev = dev->gdma_dev;
+	struct gdma_dev *mdev = &dev->gdma_dev->gdma_context->mana;
 	struct mana_port_context *mpc;
 	struct mana_context *mc;
 	struct net_device *ndev;
 
diff --git a/drivers/infiniband/hw/mana/qp.c b/drivers/infiniband/hw/mana/qp.c
index 4b3b5b274e84..ae45d28eef5e 100644
--- a/drivers/infiniband/hw/mana/qp.c
+++ b/drivers/infiniband/hw/mana/qp.c
@@ -21,8 +21,8 @@ static int mana_ib_cfg_vport_steering(struct mana_ib_dev *dev,
 	u32 req_buf_size;
 	int i, err;
 
-	mdev = dev->gdma_dev;
-	gc = mdev->gdma_context;
+	gc = dev->gdma_dev->gdma_context;
+	mdev = &gc->mana;
 
 	req_buf_size =
 		sizeof(*req) + sizeof(mana_handle_t) * MANA_INDIRECT_TABLE_SIZE;
@@ -102,20 +102,21 @@ static int mana_ib_create_qp_rss(struct ib_qp *ibqp, struct ib_pd *pd,
 	struct ib_rwq_ind_table *ind_tbl = attr->rwq_ind_tbl;
 	struct mana_ib_create_qp_rss_resp resp = {};
 	struct mana_ib_create_qp_rss ucmd = {};
-	struct gdma_dev *gd = mdev->gdma_dev;
 	mana_handle_t *mana_ind_table;
 	struct mana_port_context *mpc;
+	unsigned int ind_tbl_size;
 	struct mana_context *mc;
 	struct net_device *ndev;
 	struct mana_ib_cq *cq;
 	struct mana_ib_wq *wq;
-	unsigned int ind_tbl_size;
+	struct gdma_dev *gd;
 	struct ib_cq *ibcq;
 	struct ib_wq *ibwq;
 	int i = 0;
 	u32 port;
 	int ret;
 
+	gd = &mdev->gdma_dev->gdma_context->mana;
 	mc = gd->driver_data;
 
 	if (!udata || udata->inlen < sizeof(ucmd))
@@ -266,8 +267,8 @@ static int mana_ib_create_qp_raw(struct ib_qp *ibqp, struct ib_pd *ibpd,
 	struct mana_ib_ucontext *mana_ucontext =
 		rdma_udata_to_drv_context(udata, struct mana_ib_ucontext,
 					  ibucontext);
+	struct gdma_dev *gd = &mdev->gdma_dev->gdma_context->mana;
 	struct mana_ib_create_qp_resp resp = {};
-	struct gdma_dev *gd = mdev->gdma_dev;
 	struct mana_ib_create_qp ucmd = {};
 	struct mana_obj_spec wq_spec = {};
 	struct mana_obj_spec cq_spec = {};
@@ -437,7 +438,7 @@ static int mana_ib_destroy_qp_rss(struct mana_ib_qp *qp,
 {
 	struct mana_ib_dev *mdev = container_of(qp->ibqp.device,
 						struct mana_ib_dev, ib_dev);
-	struct gdma_dev *gd = mdev->gdma_dev;
+	struct gdma_dev *gd = &mdev->gdma_dev->gdma_context->mana;
 	struct mana_port_context *mpc;
 	struct mana_context *mc;
 	struct net_device *ndev;
@@ -464,7 +465,7 @@ static int mana_ib_destroy_qp_raw(struct mana_ib_qp *qp, struct ib_udata *udata)
 {
 	struct mana_ib_dev *mdev = container_of(qp->ibqp.device,
 						struct mana_ib_dev, ib_dev);
-	struct gdma_dev *gd = mdev->gdma_dev;
+	struct gdma_dev *gd = &mdev->gdma_dev->gdma_context->mana;
 	struct ib_pd *ibpd = qp->ibqp.pd;
 	struct mana_port_context *mpc;
 	struct mana_context *mc;
 
diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c
index 6367de0c2c2e..e6e71e3c357c 100644
--- a/drivers/net/ethernet/microsoft/mana/gdma_main.c
+++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c
@@ -158,6 +158,9 @@ static int mana_gd_detect_devices(struct pci_dev *pdev)
 		if (dev_type == GDMA_DEVICE_MANA) {
 			gc->mana.gdma_context = gc;
 			gc->mana.dev_id = dev;
+		} else if (dev_type == GDMA_DEVICE_MANA_IB) {
+			gc->mana_ib.dev_id = dev;
+			gc->mana_ib.gdma_context = gc;
 		}
 	}
 
@@ -971,6 +974,7 @@ int mana_gd_register_device(struct gdma_dev *gd)
 
 	return 0;
 }
+EXPORT_SYMBOL_NS(mana_gd_register_device, NET_MANA);
 
 int mana_gd_deregister_device(struct gdma_dev *gd)
 {
@@ -1001,6 +1005,7 @@ int mana_gd_deregister_device(struct gdma_dev *gd)
 
 	return err;
 }
+EXPORT_SYMBOL_NS(mana_gd_deregister_device, NET_MANA);
 
 u32 mana_gd_wq_avail_space(struct gdma_queue *wq)
 {
 
diff --git a/include/net/mana/gdma.h b/include/net/mana/gdma.h
index 88b6ef7ce1a6..000f0d7670f7 100644
--- a/include/net/mana/gdma.h
+++ b/include/net/mana/gdma.h
@@ -66,6 +66,7 @@ enum {
 	GDMA_DEVICE_NONE	= 0,
 	GDMA_DEVICE_HWC		= 1,
 	GDMA_DEVICE_MANA	= 2,
+	GDMA_DEVICE_MANA_IB	= 3,
 };
 
 struct gdma_resource {
@@ -387,6 +388,9 @@ struct gdma_context {
 
 	/* Azure network adapter */
 	struct gdma_dev		mana;
+
+	/* Azure RDMA adapter */
+	struct gdma_dev		mana_ib;
 };
 
 #define MAX_NUM_GDMA_DEVICES	4
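The two helpers above are exported in the NET_MANA symbol namespace, so
any module calling them has to import that namespace explicitly; a
minimal sketch of the consumer side (mana_ib is assumed to already carry
this declaration):

        #include <linux/module.h>
        #include <net/mana/gdma.h>

        /* Required to link against EXPORT_SYMBOL_NS(..., NET_MANA)
         * symbols; without it, modpost warns and module load fails on
         * kernels that enforce symbol namespaces.
         */
        MODULE_IMPORT_NS(NET_MANA);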
From patchwork Thu Dec 14 01:51:43 2023
X-Patchwork-Submitter: longli@linuxonhyperv.com
X-Patchwork-Id: 178405
From: longli@linuxonhyperv.com
To: Jason Gunthorpe, Leon Romanovsky, Ajay Sharma, Dexuan Cui,
    "K. Y. Srinivasan", Haiyang Zhang, Wei Liu, "David S. Miller",
    Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: linux-rdma@vger.kernel.org, linux-hyperv@vger.kernel.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Long Li
Subject: [Patch v3 2/3] RDMA/mana_ib: query device capabilities
Date: Wed, 13 Dec 2023 17:51:43 -0800
Message-Id: <1702518704-15886-3-git-send-email-longli@linuxonhyperv.com>
In-Reply-To: <1702518704-15886-1-git-send-email-longli@linuxonhyperv.com>
References: <1702518704-15886-1-git-send-email-longli@linuxonhyperv.com>

From: Long Li

With the RDMA device registered, use it to query hardware capabilities
and cache them for future query requests to the driver.

Signed-off-by: Long Li
---
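One detail worth calling out: the adapter reports queue-size limits in
bytes, and the driver converts them into a work-request count by
dividing by the worst-case WQE size and clamping to the smaller of the
send and receive figures. A standalone restatement of that formula
(sketch only; the size constants live in gdma.h):

        /* How max_qp_wr is derived in mana_ib_query_adapter_caps()
         * below: a QP cannot post more WRs than either queue can hold
         * at the worst-case (largest) WQE size.
         */
        static u32 sketch_max_qp_wr(u32 max_requester_sq_size,
                                    u32 max_requester_rq_size)
        {
                u32 sq_wr = max_requester_sq_size / GDMA_MAX_SQE_SIZE;
                u32 rq_wr = max_requester_rq_size / GDMA_MAX_RQE_SIZE;

                return min(sq_wr, rq_wr);
        }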
 drivers/infiniband/hw/mana/cq.c      |  2 +-
 drivers/infiniband/hw/mana/device.c  |  7 +++
 drivers/infiniband/hw/mana/main.c    | 65 ++++++++++++++++++++++------
 drivers/infiniband/hw/mana/mana_ib.h | 50 +++++++++++++++++++++
 drivers/infiniband/hw/mana/qp.c      |  4 +-
 include/net/mana/gdma.h              |  1 +
 6 files changed, 113 insertions(+), 16 deletions(-)

diff --git a/drivers/infiniband/hw/mana/cq.c b/drivers/infiniband/hw/mana/cq.c
index d141cab8a1e6..09a2c263e39b 100644
--- a/drivers/infiniband/hw/mana/cq.c
+++ b/drivers/infiniband/hw/mana/cq.c
@@ -26,7 +26,7 @@ int mana_ib_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
 		return err;
 	}
 
-	if (attr->cqe > MAX_SEND_BUFFERS_PER_QUEUE) {
+	if (attr->cqe > mdev->adapter_caps.max_qp_wr) {
 		ibdev_dbg(ibdev, "CQE %d exceeding limit\n", attr->cqe);
 		return -EINVAL;
 	}
 
diff --git a/drivers/infiniband/hw/mana/device.c b/drivers/infiniband/hw/mana/device.c
index fe025e13a45c..e9494172195b 100644
--- a/drivers/infiniband/hw/mana/device.c
+++ b/drivers/infiniband/hw/mana/device.c
@@ -85,6 +85,13 @@ static int mana_ib_probe(struct auxiliary_device *adev,
 	}
 	dev->gdma_dev = &mdev->gdma_context->mana_ib;
 
+	ret = mana_ib_query_adapter_caps(dev);
+	if (ret) {
+		ibdev_err(&dev->ib_dev, "Failed to query device caps, ret %d",
+			  ret);
+		goto free_ib_device;
+	}
+
 	ret = ib_register_device(&dev->ib_dev, "mana_%d",
 				 mdev->gdma_context->dev);
 	if (ret)
 
diff --git a/drivers/infiniband/hw/mana/main.c b/drivers/infiniband/hw/mana/main.c
index 53730306ed9b..8d8f711121d2 100644
--- a/drivers/infiniband/hw/mana/main.c
+++ b/drivers/infiniband/hw/mana/main.c
@@ -486,20 +486,17 @@ int mana_ib_get_port_immutable(struct ib_device *ibdev, u32 port_num,
 int mana_ib_query_device(struct ib_device *ibdev, struct ib_device_attr *props,
 			 struct ib_udata *uhw)
 {
-	props->max_qp = MANA_MAX_NUM_QUEUES;
-	props->max_qp_wr = MAX_SEND_BUFFERS_PER_QUEUE;
-
-	/*
-	 * max_cqe could be potentially much bigger.
-	 * As this version of driver only support RAW QP, set it to the same
-	 * value as max_qp_wr
-	 */
-	props->max_cqe = MAX_SEND_BUFFERS_PER_QUEUE;
-
+	struct mana_ib_dev *dev = container_of(ibdev,
+			struct mana_ib_dev, ib_dev);
+
+	props->max_qp = dev->adapter_caps.max_qp_count;
+	props->max_qp_wr = dev->adapter_caps.max_qp_wr;
+	props->max_cq = dev->adapter_caps.max_cq_count;
+	props->max_cqe = dev->adapter_caps.max_qp_wr;
+	props->max_mr = dev->adapter_caps.max_mr_count;
 	props->max_mr_size = MANA_IB_MAX_MR_SIZE;
-	props->max_mr = MANA_IB_MAX_MR;
-	props->max_send_sge = MAX_TX_WQE_SGL_ENTRIES;
-	props->max_recv_sge = MAX_RX_WQE_SGL_ENTRIES;
+	props->max_send_sge = dev->adapter_caps.max_send_sge_count;
+	props->max_recv_sge = dev->adapter_caps.max_recv_sge_count;
 
 	return 0;
 }
@@ -521,3 +518,45 @@ int mana_ib_query_gid(struct ib_device *ibdev, u32 port, int index,
 void mana_ib_disassociate_ucontext(struct ib_ucontext *ibcontext)
 {
 }
+
+int mana_ib_query_adapter_caps(struct mana_ib_dev *dev)
+{
+	struct mana_ib_adapter_caps *caps = &dev->adapter_caps;
+	struct mana_ib_query_adapter_caps_resp resp = {};
+	struct mana_ib_query_adapter_caps_req req = {};
+	int err;
+
+	mana_gd_init_req_hdr(&req.hdr, MANA_IB_GET_ADAPTER_CAP, sizeof(req),
+			     sizeof(resp));
+	req.hdr.resp.msg_version = GDMA_MESSAGE_V3;
+	req.hdr.dev_id = dev->gdma_dev->dev_id;
+
+	err = mana_gd_send_request(dev->gdma_dev->gdma_context, sizeof(req),
+				   &req, sizeof(resp), &resp);
+
+	if (err) {
+		ibdev_err(&dev->ib_dev,
+			  "Failed to query adapter caps err %d", err);
+		return err;
+	}
+
+	caps->max_sq_id = resp.max_sq_id;
+	caps->max_rq_id = resp.max_rq_id;
+	caps->max_cq_id = resp.max_cq_id;
+	caps->max_qp_count = resp.max_qp_count;
+	caps->max_cq_count = resp.max_cq_count;
+	caps->max_mr_count = resp.max_mr_count;
+	caps->max_pd_count = resp.max_pd_count;
+	caps->max_inbound_read_limit = resp.max_inbound_read_limit;
+	caps->max_outbound_read_limit = resp.max_outbound_read_limit;
+	caps->mw_count = resp.mw_count;
+	caps->max_srq_count = resp.max_srq_count;
+	caps->max_qp_wr = min_t(u32,
+				resp.max_requester_sq_size / GDMA_MAX_SQE_SIZE,
+				resp.max_requester_rq_size / GDMA_MAX_RQE_SIZE);
+	caps->max_inline_data_size = resp.max_inline_data_size;
+	caps->max_send_sge_count = resp.max_send_sge_count;
+	caps->max_recv_sge_count = resp.max_recv_sge_count;
+
+	return 0;
+}
diff --git a/drivers/infiniband/hw/mana/mana_ib.h b/drivers/infiniband/hw/mana/mana_ib.h
index 502cc8672eef..7cb3d8ee4292 100644
--- a/drivers/infiniband/hw/mana/mana_ib.h
+++ b/drivers/infiniband/hw/mana/mana_ib.h
@@ -27,9 +27,28 @@
  */
 #define MANA_IB_MAX_MR		0xFFFFFFu
 
+struct mana_ib_adapter_caps {
+	u32 max_sq_id;
+	u32 max_rq_id;
+	u32 max_cq_id;
+	u32 max_qp_count;
+	u32 max_cq_count;
+	u32 max_mr_count;
+	u32 max_pd_count;
+	u32 max_inbound_read_limit;
+	u32 max_outbound_read_limit;
+	u32 mw_count;
+	u32 max_srq_count;
+	u32 max_qp_wr;
+	u32 max_send_sge_count;
+	u32 max_recv_sge_count;
+	u32 max_inline_data_size;
+};
+
 struct mana_ib_dev {
 	struct ib_device ib_dev;
 	struct gdma_dev *gdma_dev;
+	struct mana_ib_adapter_caps adapter_caps;
 };
 
 struct mana_ib_wq {
@@ -92,6 +111,36 @@ struct mana_ib_rwq_ind_table {
 	struct ib_rwq_ind_table ib_ind_table;
 };
 
+enum mana_ib_command_code {
+	MANA_IB_GET_ADAPTER_CAP = 0x30001,
+};
+
+struct mana_ib_query_adapter_caps_req {
+	struct gdma_req_hdr hdr;
+}; /* HW Data */
+
+struct mana_ib_query_adapter_caps_resp {
+	struct gdma_resp_hdr hdr;
+	u32 max_sq_id;
+	u32 max_rq_id;
+	u32 max_cq_id;
+	u32 max_qp_count;
+	u32 max_cq_count;
+	u32 max_mr_count;
+	u32 max_pd_count;
+	u32 max_inbound_read_limit;
+	u32 max_outbound_read_limit;
+	u32 mw_count;
+	u32 max_srq_count;
+	u32 max_requester_sq_size;
+	u32 max_responder_sq_size;
+	u32 max_requester_rq_size;
+	u32 max_responder_rq_size;
+	u32 max_send_sge_count;
+	u32 max_recv_sge_count;
+	u32 max_inline_data_size;
+}; /* HW Data */
+
 int mana_ib_gd_create_dma_region(struct mana_ib_dev *dev, struct ib_umem *umem,
 				 mana_handle_t *gdma_region);
 
@@ -159,4 +208,5 @@ int mana_ib_query_gid(struct ib_device *ibdev, u32 port, int index,
 void mana_ib_disassociate_ucontext(struct ib_ucontext *ibcontext);
 
+int mana_ib_query_adapter_caps(struct mana_ib_dev *mdev);
 #endif
 
diff --git a/drivers/infiniband/hw/mana/qp.c b/drivers/infiniband/hw/mana/qp.c
index ae45d28eef5e..4667b18ec1dd 100644
--- a/drivers/infiniband/hw/mana/qp.c
+++ b/drivers/infiniband/hw/mana/qp.c
@@ -130,7 +130,7 @@ static int mana_ib_create_qp_rss(struct ib_qp *ibqp, struct ib_pd *pd,
 		return ret;
 	}
 
-	if (attr->cap.max_recv_wr > MAX_SEND_BUFFERS_PER_QUEUE) {
+	if (attr->cap.max_recv_wr > mdev->adapter_caps.max_qp_wr) {
 		ibdev_dbg(&mdev->ib_dev,
 			  "Requested max_recv_wr %d exceeding limit\n",
 			  attr->cap.max_recv_wr);
@@ -296,7 +296,7 @@ static int mana_ib_create_qp_raw(struct ib_qp *ibqp, struct ib_pd *ibpd,
 	if (port < 1 || port > mc->num_ports)
 		return -EINVAL;
 
-	if (attr->cap.max_send_wr > MAX_SEND_BUFFERS_PER_QUEUE) {
+	if (attr->cap.max_send_wr > mdev->adapter_caps.max_qp_wr) {
 		ibdev_dbg(&mdev->ib_dev,
 			  "Requested max_send_wr %d exceeding limit\n",
 			  attr->cap.max_send_wr);
 
diff --git a/include/net/mana/gdma.h b/include/net/mana/gdma.h
index 000f0d7670f7..797971e2d5a5 100644
--- a/include/net/mana/gdma.h
+++ b/include/net/mana/gdma.h
@@ -150,6 +150,7 @@ struct gdma_general_req {
 
 #define GDMA_MESSAGE_V1 1
 #define GDMA_MESSAGE_V2 2
+#define GDMA_MESSAGE_V3 3
 
 struct gdma_general_resp {
 	struct gdma_resp_hdr hdr;
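Once these hooks land, the adapter-derived limits are what userspace
sees through the standard verbs query path; a minimal sketch with
libibverbs (device selection and error handling trimmed for brevity):

        #include <stdio.h>
        #include <infiniband/verbs.h>

        int main(void)
        {
                int num;
                struct ibv_device **list = ibv_get_device_list(&num);
                struct ibv_context *ctx = ibv_open_device(list[0]);
                struct ibv_device_attr attr;

                /* Served by mana_ib_query_device() in the kernel. */
                if (!ibv_query_device(ctx, &attr))
                        printf("max_qp=%d max_qp_wr=%d max_cqe=%d\n",
                               attr.max_qp, attr.max_qp_wr, attr.max_cqe);

                ibv_close_device(ctx);
                ibv_free_device_list(list);
                return 0;
        }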
From patchwork Thu Dec 14 01:51:44 2023
X-Patchwork-Submitter: longli@linuxonhyperv.com
X-Patchwork-Id: 178404
From: longli@linuxonhyperv.com
To: Jason Gunthorpe, Leon Romanovsky, Ajay Sharma, Dexuan Cui,
    "K. Y. Srinivasan", Haiyang Zhang, Wei Liu, "David S. Miller",
    Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: linux-rdma@vger.kernel.org, linux-hyperv@vger.kernel.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Long Li
Subject: [Patch v3 3/3] RDMA/mana_ib: Add CQ interrupt support for RAW QP
Date: Wed, 13 Dec 2023 17:51:44 -0800
Message-Id: <1702518704-15886-4-git-send-email-longli@linuxonhyperv.com>
In-Reply-To: <1702518704-15886-1-git-send-email-longli@linuxonhyperv.com>
References: <1702518704-15886-1-git-send-email-longli@linuxonhyperv.com>

From: Long Li

At probing time, the MANA core code allocates EQs for supporting
interrupts on Ethernet queues. The same interrupt mechanism is used by
the RAW QP. Use the same EQs for delivering interrupts on the CQ for
the RAW QP.

Signed-off-by: Long Li
---
Change in v3:
Removed unused variable mana_ucontext in mana_ib_create_qp_rss().
Simplified error handling in mana_ib_create_qp_rss() on failure to
allocate queues for the RSS table.
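For orientation, a rough sketch of the delivery path this wiring enables
(the EQ walk is paraphrased from the MANA core rather than copied; the
callback and cq_table names come from the diff below):

        /* An EQE names a completed CQ; the core resolves the id through
         * gc->cq_table and fires the callback this patch installs.
         */
        static void sketch_eq_dispatch(struct gdma_context *gc, u32 cq_id)
        {
                struct gdma_queue *cq = gc->cq_table[cq_id];

                if (cq && cq->cq.callback)
                        cq->cq.callback(cq->cq.context, cq);
        }

        /* mana_ib_cq_handler() then completes the hop into verbs by
         * invoking cq->ibcq.comp_handler(&cq->ibcq, cq->ibcq.cq_context).
         */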
 drivers/infiniband/hw/mana/cq.c      | 32 +++++++++++-
 drivers/infiniband/hw/mana/mana_ib.h |  3 ++
 drivers/infiniband/hw/mana/qp.c      | 73 ++++++++++++++++++++++++++--
 3 files changed, 102 insertions(+), 6 deletions(-)

diff --git a/drivers/infiniband/hw/mana/cq.c b/drivers/infiniband/hw/mana/cq.c
index 09a2c263e39b..83ebd070535a 100644
--- a/drivers/infiniband/hw/mana/cq.c
+++ b/drivers/infiniband/hw/mana/cq.c
@@ -12,13 +12,20 @@ int mana_ib_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
 	struct ib_device *ibdev = ibcq->device;
 	struct mana_ib_create_cq ucmd = {};
 	struct mana_ib_dev *mdev;
+	struct gdma_context *gc;
 	int err;
 
 	mdev = container_of(ibdev, struct mana_ib_dev, ib_dev);
+	gc = mdev->gdma_dev->gdma_context;
 
 	if (udata->inlen < sizeof(ucmd))
 		return -EINVAL;
 
+	if (attr->comp_vector > gc->max_num_queues)
+		return -EINVAL;
+
+	cq->comp_vector = attr->comp_vector;
+
 	err = ib_copy_from_udata(&ucmd, udata, min(sizeof(ucmd), udata->inlen));
 	if (err) {
 		ibdev_dbg(ibdev,
@@ -56,6 +63,7 @@ int mana_ib_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
 	/*
 	 * The CQ ID is not known at this time. The ID is generated at create_qp
 	 */
+	cq->id = INVALID_QUEUE_ID;
 
 	return 0;
 
@@ -69,11 +77,33 @@ int mana_ib_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
 	struct mana_ib_cq *cq = container_of(ibcq, struct mana_ib_cq, ibcq);
 	struct ib_device *ibdev = ibcq->device;
 	struct mana_ib_dev *mdev;
+	struct gdma_context *gc;
+	int err;
 
 	mdev = container_of(ibdev, struct mana_ib_dev, ib_dev);
+	gc = mdev->gdma_dev->gdma_context;
+
+	err = mana_ib_gd_destroy_dma_region(mdev, cq->gdma_region);
+	if (err) {
+		ibdev_dbg(ibdev,
+			  "Failed to destroy dma region, %d\n", err);
+		return err;
+	}
+
+	if (cq->id != INVALID_QUEUE_ID) {
+		kfree(gc->cq_table[cq->id]);
+		gc->cq_table[cq->id] = NULL;
+	}
 
-	mana_ib_gd_destroy_dma_region(mdev, cq->gdma_region);
 	ib_umem_release(cq->umem);
 
 	return 0;
 }
+
+void mana_ib_cq_handler(void *ctx, struct gdma_queue *gdma_cq)
+{
+	struct mana_ib_cq *cq = ctx;
+
+	if (cq->ibcq.comp_handler)
+		cq->ibcq.comp_handler(&cq->ibcq, cq->ibcq.cq_context);
+}
 
diff --git a/drivers/infiniband/hw/mana/mana_ib.h b/drivers/infiniband/hw/mana/mana_ib.h
index 7cb3d8ee4292..53bb4905afd5 100644
--- a/drivers/infiniband/hw/mana/mana_ib.h
+++ b/drivers/infiniband/hw/mana/mana_ib.h
@@ -86,6 +86,7 @@ struct mana_ib_cq {
 	int cqe;
 	u64 gdma_region;
 	u64 id;
+	u32 comp_vector;
 };
 
 struct mana_ib_qp {
@@ -209,4 +210,6 @@ int mana_ib_query_gid(struct ib_device *ibdev, u32 port, int index,
 void mana_ib_disassociate_ucontext(struct ib_ucontext *ibcontext);
 
 int mana_ib_query_adapter_caps(struct mana_ib_dev *mdev);
+
+void mana_ib_cq_handler(void *ctx, struct gdma_queue *gdma_cq);
 #endif
 
diff --git a/drivers/infiniband/hw/mana/qp.c b/drivers/infiniband/hw/mana/qp.c
index 4667b18ec1dd..19998082a376 100644
--- a/drivers/infiniband/hw/mana/qp.c
+++ b/drivers/infiniband/hw/mana/qp.c
@@ -102,21 +102,26 @@ static int
mana_ib_create_qp_rss(struct ib_qp *ibqp, struct ib_pd *pd,
 	struct ib_rwq_ind_table *ind_tbl = attr->rwq_ind_tbl;
 	struct mana_ib_create_qp_rss_resp resp = {};
 	struct mana_ib_create_qp_rss ucmd = {};
+	struct gdma_queue **gdma_cq_allocated;
 	mana_handle_t *mana_ind_table;
 	struct mana_port_context *mpc;
+	struct gdma_queue *gdma_cq;
 	unsigned int ind_tbl_size;
 	struct mana_context *mc;
 	struct net_device *ndev;
+	struct gdma_context *gc;
 	struct mana_ib_cq *cq;
 	struct mana_ib_wq *wq;
 	struct gdma_dev *gd;
+	struct mana_eq *eq;
 	struct ib_cq *ibcq;
 	struct ib_wq *ibwq;
 	int i = 0;
 	u32 port;
 	int ret;
 
-	gd = &mdev->gdma_dev->gdma_context->mana;
+	gc = mdev->gdma_dev->gdma_context;
+	gd = &gc->mana;
 	mc = gd->driver_data;
 
 	if (!udata || udata->inlen < sizeof(ucmd))
@@ -179,6 +184,13 @@ static int mana_ib_create_qp_rss(struct ib_qp *ibqp, struct ib_pd *pd,
 		goto fail;
 	}
 
+	gdma_cq_allocated = kcalloc(ind_tbl_size, sizeof(*gdma_cq_allocated),
+				    GFP_KERNEL);
+	if (!gdma_cq_allocated) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
 	qp->port = port;
 
 	for (i = 0; i < ind_tbl_size; i++) {
@@ -197,12 +209,16 @@ static int mana_ib_create_qp_rss(struct ib_qp *ibqp, struct ib_pd *pd,
 		cq_spec.gdma_region = cq->gdma_region;
 		cq_spec.queue_size = cq->cqe * COMP_ENTRY_SIZE;
 		cq_spec.modr_ctx_id = 0;
-		cq_spec.attached_eq = GDMA_CQ_NO_EQ;
+		eq = &mc->eqs[cq->comp_vector % gc->max_num_queues];
+		cq_spec.attached_eq = eq->eq->id;
 
 		ret = mana_create_wq_obj(mpc, mpc->port_handle, GDMA_RQ,
 					 &wq_spec, &cq_spec, &wq->rx_object);
-		if (ret)
+		if (ret) {
+			/* Do cleanup starting with index i-1 */
+			i--;
 			goto fail;
+		}
 
 		/* The GDMA regions are now owned by the WQ object */
 		wq->gdma_region = GDMA_INVALID_DMA_REGION;
@@ -219,6 +235,21 @@ static int mana_ib_create_qp_rss(struct ib_qp *ibqp, struct ib_pd *pd,
 		resp.entries[i].wqid = wq->id;
 
 		mana_ind_table[i] = wq->rx_object;
+
+		/* Create CQ table entry */
+		WARN_ON(gc->cq_table[cq->id]);
+		gdma_cq = kzalloc(sizeof(*gdma_cq), GFP_KERNEL);
+		if (!gdma_cq) {
+			ret = -ENOMEM;
+			goto fail;
+		}
+		gdma_cq_allocated[i] = gdma_cq;
+
+		gdma_cq->cq.context = cq;
+		gdma_cq->type = GDMA_CQ;
+		gdma_cq->cq.callback = mana_ib_cq_handler;
+		gdma_cq->id = cq->id;
+		gc->cq_table[cq->id] = gdma_cq;
 	}
 	resp.num_entries = i;
 
@@ -238,6 +269,7 @@ static int mana_ib_create_qp_rss(struct ib_qp *ibqp, struct ib_pd *pd,
 		goto fail;
 	}
 
+	kfree(gdma_cq_allocated);
 	kfree(mana_ind_table);
 
 	return 0;
@@ -245,10 +277,17 @@ static int mana_ib_create_qp_rss(struct ib_qp *ibqp, struct ib_pd *pd,
 fail:
 	while (i-- > 0) {
 		ibwq = ind_tbl->ind_tbl[i];
+		ibcq = ibwq->cq;
 		wq = container_of(ibwq, struct mana_ib_wq, ibwq);
+		cq = container_of(ibcq, struct mana_ib_cq, ibcq);
+
+		gc->cq_table[cq->id] = NULL;
+		kfree(gdma_cq_allocated[i]);
+
 		mana_destroy_wq_obj(mpc, GDMA_RQ, wq->rx_object);
 	}
 
+	kfree(gdma_cq_allocated);
 	kfree(mana_ind_table);
 
 	return ret;
@@ -270,14 +309,17 @@ static int mana_ib_create_qp_raw(struct ib_qp *ibqp, struct ib_pd *ibpd,
 	struct gdma_dev *gd = &mdev->gdma_dev->gdma_context->mana;
 	struct mana_ib_create_qp_resp resp = {};
 	struct mana_ib_create_qp ucmd = {};
+	struct gdma_queue *gdma_cq = NULL;
 	struct mana_obj_spec wq_spec = {};
 	struct mana_obj_spec cq_spec = {};
 	struct mana_port_context *mpc;
 	struct mana_context *mc;
 	struct net_device *ndev;
 	struct ib_umem *umem;
-	int err;
+	struct mana_eq *eq;
+	int eq_vec;
 	u32 port;
+	int err;
 
 	mc = gd->driver_data;
 
@@ -354,7 +396,9 @@ static int mana_ib_create_qp_raw(struct ib_qp *ibqp, struct ib_pd *ibpd,
 	cq_spec.gdma_region = send_cq->gdma_region;
 	cq_spec.queue_size = send_cq->cqe * COMP_ENTRY_SIZE;
 	cq_spec.modr_ctx_id = 0;
-	cq_spec.attached_eq = GDMA_CQ_NO_EQ;
+	eq_vec = send_cq->comp_vector % gd->gdma_context->max_num_queues;
+	eq = &mc->eqs[eq_vec];
+	cq_spec.attached_eq = eq->eq->id;
 
 	err = mana_create_wq_obj(mpc, mpc->port_handle, GDMA_SQ, &wq_spec,
 				 &cq_spec, &qp->tx_object);
@@ -372,6 +416,20 @@ static int mana_ib_create_qp_raw(struct ib_qp *ibqp, struct ib_pd *ibpd,
 	qp->sq_id = wq_spec.queue_index;
 	send_cq->id = cq_spec.queue_index;
 
+	/* Create CQ table entry */
+	WARN_ON(gd->gdma_context->cq_table[send_cq->id]);
+	gdma_cq = kzalloc(sizeof(*gdma_cq), GFP_KERNEL);
+	if (!gdma_cq) {
+		err = -ENOMEM;
+		goto err_destroy_wq_obj;
+	}
+
+	gdma_cq->cq.context = send_cq;
+	gdma_cq->type = GDMA_CQ;
+	gdma_cq->cq.callback = mana_ib_cq_handler;
+	gdma_cq->id = send_cq->id;
+	gd->gdma_context->cq_table[send_cq->id] = gdma_cq;
+
 	ibdev_dbg(&mdev->ib_dev,
 		  "ret %d qp->tx_object 0x%llx sq id %llu cq id %llu\n", err,
 		  qp->tx_object, qp->sq_id, send_cq->id);
@@ -391,6 +449,11 @@ static int mana_ib_create_qp_raw(struct ib_qp *ibqp, struct ib_pd *ibpd,
 	return 0;
 
 err_destroy_wq_obj:
+	if (gdma_cq) {
+		kfree(gdma_cq);
+		gd->gdma_context->cq_table[send_cq->id] = NULL;
+	}
+
 	mana_destroy_wq_obj(mpc, GDMA_SQ, qp->tx_object);
 
 err_destroy_dma_region: