From patchwork Sat Oct 22 00:01:18 2022
From: longli@linuxonhyperv.com
To: "K. Y. Srinivasan", Haiyang Zhang, Stephen Hemminger, Wei Liu,
	Dexuan Cui, "David S. Miller", Jakub Kicinski, Paolo Abeni,
	Jason Gunthorpe, Leon Romanovsky, edumazet@google.com,
	shiraz.saleem@intel.com, Ajay Sharma
Cc: linux-hyperv@vger.kernel.org, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, Long Li
Subject: [Patch v9 01/12] net: mana: Add support for auxiliary device
Date: Fri, 21 Oct 2022 17:01:18 -0700
Message-Id: <1666396889-31288-2-git-send-email-longli@linuxonhyperv.com>
In-Reply-To: <1666396889-31288-1-git-send-email-longli@linuxonhyperv.com>
References: <1666396889-31288-1-git-send-email-longli@linuxonhyperv.com>
Reply-To: longli@microsoft.com

From: Long Li

In preparation for supporting the MANA RDMA driver, add support for
auxiliary devices in the Ethernet driver. The RDMA device is modeled as
an auxiliary device of the Ethernet device.
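The auxiliary bus matches devices to drivers by the "<parent module>.<device name>" string, so the "rdma" device created by this patch is matched as "mana.rdma". Below is a minimal sketch of how a consumer could bind to it, assuming only the kernel's auxiliary bus API and the mana_adev wrapper added in this patch; the driver name and the probe body are hypothetical and not part of this series.

#include <linux/auxiliary_bus.h>
#include <linux/module.h>

/* Hypothetical consumer of the "rdma" auxiliary device. The parent
 * module is "mana", so the match string is "mana.rdma".
 */
static int example_rdma_probe(struct auxiliary_device *adev,
			      const struct auxiliary_device_id *id)
{
	/* container_of(adev, struct mana_adev, adev) recovers the wrapper;
	 * madev->mdev then points at the Ethernet side's gdma_dev.
	 */
	return 0;
}

static void example_rdma_remove(struct auxiliary_device *adev)
{
}

static const struct auxiliary_device_id example_rdma_id_table[] = {
	{ .name = "mana.rdma", },
	{},
};
MODULE_DEVICE_TABLE(auxiliary, example_rdma_id_table);

static struct auxiliary_driver example_rdma_driver = {
	.name = "example_rdma",
	.probe = example_rdma_probe,
	.remove = example_rdma_remove,
	.id_table = example_rdma_id_table,
};
module_auxiliary_driver(example_rdma_driver);

MODULE_LICENSE("GPL");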
Reviewed-by: Dexuan Cui
Signed-off-by: Long Li
Acked-by: Haiyang Zhang
---
Change log:
v3: define mana_adev_idx_alloc and mana_adev_idx_free as static
v7: fix a bug that may assign a negative value to adev->id

 drivers/net/ethernet/microsoft/Kconfig        |  1 +
 drivers/net/ethernet/microsoft/mana/gdma.h    |  2 +
 .../ethernet/microsoft/mana/mana_auxiliary.h  | 10 +++
 drivers/net/ethernet/microsoft/mana/mana_en.c | 83 ++++++++++++++++++-
 4 files changed, 95 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/ethernet/microsoft/mana/mana_auxiliary.h

diff --git a/drivers/net/ethernet/microsoft/Kconfig b/drivers/net/ethernet/microsoft/Kconfig
index fe4e7a7d9c0b..090e6b983243 100644
--- a/drivers/net/ethernet/microsoft/Kconfig
+++ b/drivers/net/ethernet/microsoft/Kconfig
@@ -19,6 +19,7 @@ config MICROSOFT_MANA
 	tristate "Microsoft Azure Network Adapter (MANA) support"
 	depends on PCI_MSI && X86_64
 	depends on PCI_HYPERV
+	select AUXILIARY_BUS
 	help
 	  This driver supports Microsoft Azure Network Adapter (MANA).
 	  So far, the driver is only supported on X86_64.

diff --git a/drivers/net/ethernet/microsoft/mana/gdma.h b/drivers/net/ethernet/microsoft/mana/gdma.h
index 4a6efe6ada08..f321a2616d03 100644
--- a/drivers/net/ethernet/microsoft/mana/gdma.h
+++ b/drivers/net/ethernet/microsoft/mana/gdma.h
@@ -204,6 +204,8 @@ struct gdma_dev {
 
 	/* GDMA driver specific pointer */
 	void *driver_data;
+
+	struct auxiliary_device *adev;
 };
 
 #define MINIMUM_SUPPORTED_PAGE_SIZE PAGE_SIZE

diff --git a/drivers/net/ethernet/microsoft/mana/mana_auxiliary.h b/drivers/net/ethernet/microsoft/mana/mana_auxiliary.h
new file mode 100644
index 000000000000..373d59756846
--- /dev/null
+++ b/drivers/net/ethernet/microsoft/mana/mana_auxiliary.h
@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright (c) 2022, Microsoft Corporation. */
+
+#include "mana.h"
+#include <linux/auxiliary_bus.h>
+
+struct mana_adev {
+	struct auxiliary_device adev;
+	struct gdma_dev *mdev;
+};

diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
index 9259a74eca40..8751e475d1ba 100644
--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
+++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
@@ -13,6 +13,19 @@
 #include <net/ip6_checksum.h>
 
 #include "mana.h"
+#include "mana_auxiliary.h"
+
+static DEFINE_IDA(mana_adev_ida);
+
+static int mana_adev_idx_alloc(void)
+{
+	return ida_alloc(&mana_adev_ida, GFP_KERNEL);
+}
+
+static void mana_adev_idx_free(int idx)
+{
+	ida_free(&mana_adev_ida, idx);
+}
 
 /* Microsoft Azure Network Adapter (MANA) functions */
 
@@ -2106,6 +2119,69 @@ static int mana_probe_port(struct mana_context *ac, int port_idx,
 	return err;
 }
 
+static void adev_release(struct device *dev)
+{
+	struct mana_adev *madev = container_of(dev, struct mana_adev, adev.dev);
+
+	kfree(madev);
+}
+
+static void remove_adev(struct gdma_dev *gd)
+{
+	struct auxiliary_device *adev = gd->adev;
+	int id = adev->id;
+
+	auxiliary_device_delete(adev);
+	auxiliary_device_uninit(adev);
+
+	mana_adev_idx_free(id);
+	gd->adev = NULL;
+}
+
+static int add_adev(struct gdma_dev *gd)
+{
+	struct auxiliary_device *adev;
+	struct mana_adev *madev;
+	int ret;
+
+	madev = kzalloc(sizeof(*madev), GFP_KERNEL);
+	if (!madev)
+		return -ENOMEM;
+
+	adev = &madev->adev;
+	ret = mana_adev_idx_alloc();
+	if (ret < 0)
+		goto idx_fail;
+	adev->id = ret;
+
+	adev->name = "rdma";
+	adev->dev.parent = gd->gdma_context->dev;
+	adev->dev.release = adev_release;
+	madev->mdev = gd;
+
+	ret = auxiliary_device_init(adev);
+	if (ret)
+		goto init_fail;
+
+	ret = auxiliary_device_add(adev);
+	if (ret)
+		goto add_fail;
+
+	gd->adev = adev;
+	return 0;
+
+add_fail:
+	auxiliary_device_uninit(adev);
+
+init_fail:
+	mana_adev_idx_free(adev->id);
+
+idx_fail:
+	kfree(madev);
+
+	return ret;
+}
+
 int mana_probe(struct gdma_dev *gd, bool resuming)
 {
 	struct gdma_context *gc = gd->gdma_context;
@@ -2173,6 +2249,8 @@ int mana_probe(struct gdma_dev *gd, bool resuming)
 			break;
 		}
 	}
+
+	err = add_adev(gd);
 out:
 	if (err)
 		mana_remove(gd, false);
@@ -2189,6 +2267,10 @@ void mana_remove(struct gdma_dev *gd, bool suspending)
 	int err;
 	int i;
 
+	/* adev currently doesn't support suspending, always remove it */
+	if (gd->adev)
+		remove_adev(gd);
+
 	for (i = 0; i < ac->num_ports; i++) {
 		ndev = ac->ports[i];
 		if (!ndev) {
@@ -2221,7 +2303,6 @@ void mana_remove(struct gdma_dev *gd, bool suspending)
 	}
 
 	mana_destroy_eq(ac);
-
 out:
 	mana_gd_deregister_device(gd);
From patchwork Sat Oct 22 00:01:19 2022
From: longli@linuxonhyperv.com
To: "K. Y. Srinivasan", Haiyang Zhang, Stephen Hemminger, Wei Liu,
	Dexuan Cui, "David S. Miller", Jakub Kicinski, Paolo Abeni,
	Jason Gunthorpe, Leon Romanovsky, edumazet@google.com,
	shiraz.saleem@intel.com, Ajay Sharma
Cc: linux-hyperv@vger.kernel.org, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, Long Li
Miller" , Jakub Kicinski , Paolo Abeni , Jason Gunthorpe , Leon Romanovsky , edumazet@google.com, shiraz.saleem@intel.com, Ajay Sharma Cc: linux-hyperv@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, Long Li Subject: [Patch v9 02/12] net: mana: Record the physical address for doorbell page region Date: Fri, 21 Oct 2022 17:01:19 -0700 Message-Id: <1666396889-31288-3-git-send-email-longli@linuxonhyperv.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1666396889-31288-1-git-send-email-longli@linuxonhyperv.com> References: <1666396889-31288-1-git-send-email-longli@linuxonhyperv.com> Reply-To: longli@microsoft.com X-Spam-Status: No, score=-11.6 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, RCVD_IN_DNSWL_MED,SPF_HELO_PASS,SPF_PASS,USER_IN_DEF_SPF_WL autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1747343924551760998?= X-GMAIL-MSGID: =?utf-8?q?1747343924551760998?= From: Long Li For supporting RDMA device with multiple user contexts with their individual doorbell pages, record the start address of doorbell page region for use by the RDMA driver to allocate user context doorbell IDs. Reviewed-by: Dexuan Cui Signed-off-by: Long Li Acked-by: Haiyang Zhang --- Change log: v6: rebased to rdma-next drivers/net/ethernet/microsoft/mana/gdma.h | 2 ++ drivers/net/ethernet/microsoft/mana/gdma_main.c | 4 ++++ 2 files changed, 6 insertions(+) diff --git a/drivers/net/ethernet/microsoft/mana/gdma.h b/drivers/net/ethernet/microsoft/mana/gdma.h index f321a2616d03..72eaec2470c0 100644 --- a/drivers/net/ethernet/microsoft/mana/gdma.h +++ b/drivers/net/ethernet/microsoft/mana/gdma.h @@ -351,9 +351,11 @@ struct gdma_context { u32 test_event_eq_id; bool is_pf; + phys_addr_t bar0_pa; void __iomem *bar0_va; void __iomem *shm_base; void __iomem *db_page_base; + phys_addr_t phys_db_page_base; u32 db_page_size; /* Shared memory chanenl (used to bootstrap HWC) */ diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c index 5f9240182351..0cfe5f15458e 100644 --- a/drivers/net/ethernet/microsoft/mana/gdma_main.c +++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c @@ -44,6 +44,9 @@ static void mana_gd_init_vf_regs(struct pci_dev *pdev) gc->db_page_base = gc->bar0_va + mana_gd_r64(gc, GDMA_REG_DB_PAGE_OFFSET); + gc->phys_db_page_base = gc->bar0_pa + + mana_gd_r64(gc, GDMA_REG_DB_PAGE_OFFSET); + gc->shm_base = gc->bar0_va + mana_gd_r64(gc, GDMA_REG_SHM_OFFSET); } @@ -1367,6 +1370,7 @@ static int mana_gd_probe(struct pci_dev *pdev, const struct pci_device_id *ent) mutex_init(&gc->eq_test_event_mutex); pci_set_drvdata(pdev, gc); + gc->bar0_pa = pci_resource_start(pdev, 0); bar0_va = pci_iomap(pdev, bar, 0); if (!bar0_va) From patchwork Sat Oct 22 00:01:20 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: longli@linuxonhyperv.com X-Patchwork-Id: 7035 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:4242:0:0:0:0:0 with SMTP id s2csp960037wrr; Fri, 21 Oct 2022 17:03:38 -0700 (PDT) X-Google-Smtp-Source: AMsMyM40mIBtlebx0ExI/Dlc0vbj6mAz+BMCuqYFlCb9Biqh9H/DXt0mIdyKhwKykCA8T3i49amt X-Received: by 2002:a17:907:daa:b0:78d:9bc9:7d7a with SMTP id 
From patchwork Sat Oct 22 00:01:20 2022
From: longli@linuxonhyperv.com
To: "K. Y. Srinivasan", Haiyang Zhang, Stephen Hemminger, Wei Liu,
	Dexuan Cui, "David S. Miller", Jakub Kicinski, Paolo Abeni,
	Jason Gunthorpe, Leon Romanovsky, edumazet@google.com,
	shiraz.saleem@intel.com, Ajay Sharma
Cc: linux-hyperv@vger.kernel.org, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, Long Li
Miller" , Jakub Kicinski , Paolo Abeni , Jason Gunthorpe , Leon Romanovsky , edumazet@google.com, shiraz.saleem@intel.com, Ajay Sharma Cc: linux-hyperv@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, Long Li Subject: [Patch v9 03/12] net: mana: Handle vport sharing between devices Date: Fri, 21 Oct 2022 17:01:20 -0700 Message-Id: <1666396889-31288-4-git-send-email-longli@linuxonhyperv.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1666396889-31288-1-git-send-email-longli@linuxonhyperv.com> References: <1666396889-31288-1-git-send-email-longli@linuxonhyperv.com> Reply-To: longli@microsoft.com X-Spam-Status: No, score=-11.6 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, RCVD_IN_DNSWL_MED,SPF_HELO_PASS,SPF_PASS,USER_IN_DEF_SPF_WL autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1747343919304046337?= X-GMAIL-MSGID: =?utf-8?q?1747343919304046337?= From: Long Li For outgoing packets, the PF requires the VF to configure the vport with corresponding protection domain and doorbell ID for the kernel or user context. The vport can't be shared between different contexts. Implement the logic to exclusively take over the vport by either the Ethernet device or RDMA device. Reviewed-by: Dexuan Cui Signed-off-by: Long Li Acked-by: Haiyang Zhang --- Change log: v2: use refcount instead of directly using atomic variables v4: change to mutex to avoid possible race with refcount v5: add detailed comments explaining vport sharing, use EXPORT_SYMBOL_NS v6: rebased to rdma-next drivers/net/ethernet/microsoft/mana/mana.h | 7 +++ drivers/net/ethernet/microsoft/mana/mana_en.c | 53 ++++++++++++++++++- 2 files changed, 58 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/microsoft/mana/mana.h b/drivers/net/ethernet/microsoft/mana/mana.h index d58be64374c8..2883a08dbfb5 100644 --- a/drivers/net/ethernet/microsoft/mana/mana.h +++ b/drivers/net/ethernet/microsoft/mana/mana.h @@ -380,6 +380,10 @@ struct mana_port_context { mana_handle_t port_handle; mana_handle_t pf_filter_handle; + /* Mutex for sharing access to vport_use_count */ + struct mutex vport_mutex; + int vport_use_count; + u16 port_idx; bool port_is_up; @@ -631,4 +635,7 @@ struct mana_tx_package { struct gdma_posted_wqe_info wqe_info; }; +int mana_cfg_vport(struct mana_port_context *apc, u32 protection_dom_id, + u32 doorbell_pg_id); +void mana_uncfg_vport(struct mana_port_context *apc); #endif /* _MANA_H */ diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c index 8751e475d1ba..efe14a343fd1 100644 --- a/drivers/net/ethernet/microsoft/mana/mana_en.c +++ b/drivers/net/ethernet/microsoft/mana/mana_en.c @@ -646,13 +646,48 @@ static int mana_query_vport_cfg(struct mana_port_context *apc, u32 vport_index, return 0; } -static int mana_cfg_vport(struct mana_port_context *apc, u32 protection_dom_id, - u32 doorbell_pg_id) +void mana_uncfg_vport(struct mana_port_context *apc) +{ + mutex_lock(&apc->vport_mutex); + apc->vport_use_count--; + WARN_ON(apc->vport_use_count < 0); + mutex_unlock(&apc->vport_mutex); +} +EXPORT_SYMBOL_NS(mana_uncfg_vport, NET_MANA); + +int mana_cfg_vport(struct mana_port_context *apc, u32 protection_dom_id, + u32 doorbell_pg_id) { struct 
 	struct mana_config_vport_resp resp = {};
 	struct mana_config_vport_req req = {};
 	int err;
 
+	/* This function is used to program the Ethernet port in the hardware
+	 * table. It can be called from the Ethernet driver or the RDMA driver.
+	 *
+	 * For Ethernet usage, the hardware supports only one active user on a
+	 * physical port. The driver checks on the port usage before programming
+	 * the hardware when creating the RAW QP (RDMA driver) or exposing the
+	 * device to kernel NET layer (Ethernet driver).
+	 *
+	 * Because the RDMA driver doesn't know in advance which QP type the
+	 * user will create, it exposes the device with all its ports. The user
+	 * may not be able to create RAW QP on a port if this port is already
+	 * in use by the Ethernet driver from the kernel.
+	 *
+	 * This physical port limitation only applies to the RAW QP. For RC QP,
+	 * the hardware doesn't have this limitation. The user can create RC
+	 * QPs on a physical port up to the hardware limits independent of the
+	 * Ethernet usage on the same port.
+	 */
+	mutex_lock(&apc->vport_mutex);
+	if (apc->vport_use_count > 0) {
+		mutex_unlock(&apc->vport_mutex);
+		return -EBUSY;
+	}
+	apc->vport_use_count++;
+	mutex_unlock(&apc->vport_mutex);
+
 	mana_gd_init_req_hdr(&req.hdr, MANA_CONFIG_VPORT_TX, sizeof(req),
 			     sizeof(resp));
 	req.vport = apc->port_handle;
@@ -679,9 +714,16 @@ static int mana_cfg_vport(struct mana_port_context *apc, u32 protection_dom_id,
 	apc->tx_shortform_allowed = resp.short_form_allowed;
 	apc->tx_vp_offset = resp.tx_vport_offset;
+
+	netdev_info(apc->ndev, "Configured vPort %llu PD %u DB %u\n",
+		    apc->port_handle, protection_dom_id, doorbell_pg_id);
 out:
+	if (err)
+		mana_uncfg_vport(apc);
+
 	return err;
 }
+EXPORT_SYMBOL_NS(mana_cfg_vport, NET_MANA);
 
 static int mana_cfg_vport_steering(struct mana_port_context *apc,
 				   enum TRI_STATE rx,
@@ -742,6 +784,9 @@ static int mana_cfg_vport_steering(struct mana_port_context *apc,
 			   resp.hdr.status);
 		err = -EPROTO;
 	}
+
+	netdev_info(ndev, "Configured steering vPort %llu entries %u\n",
+		    apc->port_handle, num_entries);
 out:
 	kfree(req);
 	return err;
@@ -1804,6 +1849,7 @@ static void mana_destroy_vport(struct mana_port_context *apc)
 	}
 
 	mana_destroy_txq(apc);
+	mana_uncfg_vport(apc);
 
 	if (gd->gdma_context->is_pf)
 		mana_pf_deregister_hw_vport(apc);
@@ -2076,6 +2122,9 @@ static int mana_probe_port(struct mana_context *ac, int port_idx,
 	apc->pf_filter_handle = INVALID_MANA_HANDLE;
 	apc->port_idx = port_idx;
 
+	mutex_init(&apc->vport_mutex);
+	apc->vport_use_count = 0;
+
 	ndev->netdev_ops = &mana_devops;
 	ndev->ethtool_ops = &mana_ethtool_ops;
 	ndev->mtu = ETH_DATA_LEN;
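From the consumer's point of view, use of the two exports might look like the following hedged sketch; only mana_cfg_vport() and mana_uncfg_vport() come from this patch, and the calling functions are illustrative:

/* Illustrative caller: claim the vport for a user-context RAW QP. */
static int example_claim_vport(struct mana_port_context *apc, u32 pd_id,
			       u32 doorbell_id)
{
	int err;

	/* Fails with -EBUSY if the Ethernet driver (or another context)
	 * already owns the vport.
	 */
	err = mana_cfg_vport(apc, pd_id, doorbell_id);
	if (err)
		return err;

	/* ... create the RAW QP on this vport ... */

	return 0;
}

static void example_release_vport(struct mana_port_context *apc)
{
	/* Drop the use count taken by mana_cfg_vport(). */
	mana_uncfg_vport(apc);
}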
From patchwork Sat Oct 22 00:01:21 2022
From: longli@linuxonhyperv.com
To: "K. Y. Srinivasan", Haiyang Zhang, Stephen Hemminger, Wei Liu,
	Dexuan Cui, "David S. Miller", Jakub Kicinski, Paolo Abeni,
	Jason Gunthorpe, Leon Romanovsky, edumazet@google.com,
	shiraz.saleem@intel.com, Ajay Sharma
Cc: linux-hyperv@vger.kernel.org, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, Long Li
Miller" , Jakub Kicinski , Paolo Abeni , Jason Gunthorpe , Leon Romanovsky , edumazet@google.com, shiraz.saleem@intel.com, Ajay Sharma Cc: linux-hyperv@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, Long Li Subject: [Patch v9 04/12] net: mana: Set the DMA device max segment size Date: Fri, 21 Oct 2022 17:01:21 -0700 Message-Id: <1666396889-31288-5-git-send-email-longli@linuxonhyperv.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1666396889-31288-1-git-send-email-longli@linuxonhyperv.com> References: <1666396889-31288-1-git-send-email-longli@linuxonhyperv.com> Reply-To: longli@microsoft.com X-Spam-Status: No, score=-11.6 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, RCVD_IN_DNSWL_MED,SPF_HELO_PASS,SPF_PASS,USER_IN_DEF_SPF_WL autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1747343937391240257?= X-GMAIL-MSGID: =?utf-8?q?1747343937391240257?= From: Ajay Sharma MANA hardware doesn't have any restrictions on the DMA segment size, set it to the max allowed value. Signed-off-by: Ajay Sharma Reviewed-by: Dexuan Cui Signed-off-by: Long Li Acked-by: Haiyang Zhang --- Change log: v2: Use the max allowed value as the hardware doesn't have any limit drivers/net/ethernet/microsoft/mana/gdma_main.c | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c index 0cfe5f15458e..4f041b27c07d 100644 --- a/drivers/net/ethernet/microsoft/mana/gdma_main.c +++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c @@ -1363,6 +1363,12 @@ static int mana_gd_probe(struct pci_dev *pdev, const struct pci_device_id *ent) if (err) goto release_region; + err = dma_set_max_seg_size(&pdev->dev, UINT_MAX); + if (err) { + dev_err(&pdev->dev, "Failed to set dma device segment size\n"); + goto release_region; + } + err = -ENOMEM; gc = vzalloc(sizeof(*gc)); if (!gc) From patchwork Sat Oct 22 00:01:22 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: longli@linuxonhyperv.com X-Patchwork-Id: 7041 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:4242:0:0:0:0:0 with SMTP id s2csp961055wrr; Fri, 21 Oct 2022 17:06:26 -0700 (PDT) X-Google-Smtp-Source: AMsMyM7D+yXm6vdrft0sWiur942rdIz/cswcsoxEQH+wbgct93C4SpR08ACNxbGjsZFQ9r8kp5l9 X-Received: by 2002:a17:907:272a:b0:791:994d:fb6a with SMTP id d10-20020a170907272a00b00791994dfb6amr17939535ejl.337.1666397186287; Fri, 21 Oct 2022 17:06:26 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1666397186; cv=none; d=google.com; s=arc-20160816; b=YQrcZdCEDE6Ruz9/E8sEGR8T01vhIA+00d7DZRA9hgSQ0866C5/c0ZoUayanDYB24t IWaVTo7YfEe7v7TRcoZnfw0FvHs+sbroE06Owu5P/Ie2MWwtBAsNIB+59HK9lK23aGB9 kH34ZTWpn+AgOvtiaoeWVrJxl4YpoBXC6CGS0dsRjF+tA8UFeJXAhTAh4YkTjbtsOkmt xrRiuP3EEj1/iEnvFk7wGCgBs2FHlmHAJCYTCOyT7YEOnsBHEk2JcztPajZs9HqHlgLc wjY4sZs3Ow28NsFgQBCcoDQZjUKz0P2B+d71QSFPl17Bq8d9To20SX5Yx4fuTMdFOtbE RG8Q== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:reply-to:references:in-reply-to:message-id:date :subject:cc:to:from:dkim-signature:dkim-filter; bh=VZLU6TkFNRW5iAmkrZyBMdtSidOZ/oflWZp0XwAvTYk=; 
From patchwork Sat Oct 22 00:01:22 2022
From: longli@linuxonhyperv.com
To: "K. Y. Srinivasan", Haiyang Zhang, Stephen Hemminger, Wei Liu,
	Dexuan Cui, "David S. Miller", Jakub Kicinski, Paolo Abeni,
	Jason Gunthorpe, Leon Romanovsky, edumazet@google.com,
	shiraz.saleem@intel.com, Ajay Sharma
Cc: linux-hyperv@vger.kernel.org, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, Long Li
Miller" , Jakub Kicinski , Paolo Abeni , Jason Gunthorpe , Leon Romanovsky , edumazet@google.com, shiraz.saleem@intel.com, Ajay Sharma Cc: linux-hyperv@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, Long Li Subject: [Patch v9 05/12] net: mana: Export Work Queue functions for use by RDMA driver Date: Fri, 21 Oct 2022 17:01:22 -0700 Message-Id: <1666396889-31288-6-git-send-email-longli@linuxonhyperv.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1666396889-31288-1-git-send-email-longli@linuxonhyperv.com> References: <1666396889-31288-1-git-send-email-longli@linuxonhyperv.com> Reply-To: longli@microsoft.com X-Spam-Status: No, score=-11.6 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, RCVD_IN_DNSWL_MED,SPF_HELO_PASS,SPF_PASS,USER_IN_DEF_SPF_WL autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1747344095921887152?= X-GMAIL-MSGID: =?utf-8?q?1747344095921887152?= From: Long Li RDMA device may need to create Ethernet device queues for use by Queue Pair type RAW. This allows a user-mode context accesses Ethernet hardware queues. Export the supporting functions for use by the RDMA driver. Reviewed-by: Dexuan Cui Signed-off-by: Long Li Acked-by: Haiyang Zhang --- Change log: v3: format/coding style changes v5: remove unused defintions, use EXPORT_SYMBOL_NS, rearrange some defintions to a later patch in the series drivers/net/ethernet/microsoft/mana/gdma_main.c | 1 + drivers/net/ethernet/microsoft/mana/mana.h | 9 +++++++++ drivers/net/ethernet/microsoft/mana/mana_en.c | 16 +++++++++------- 3 files changed, 19 insertions(+), 7 deletions(-) diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c index 4f041b27c07d..aab22911f20d 100644 --- a/drivers/net/ethernet/microsoft/mana/gdma_main.c +++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c @@ -152,6 +152,7 @@ int mana_gd_send_request(struct gdma_context *gc, u32 req_len, const void *req, return mana_hwc_send_request(hwc, req_len, req, resp_len, resp); } +EXPORT_SYMBOL_NS(mana_gd_send_request, NET_MANA); int mana_gd_alloc_memory(struct gdma_context *gc, unsigned int length, struct gdma_mem_info *gmi) diff --git a/drivers/net/ethernet/microsoft/mana/mana.h b/drivers/net/ethernet/microsoft/mana/mana.h index 2883a08dbfb5..6e9e86fb4c02 100644 --- a/drivers/net/ethernet/microsoft/mana/mana.h +++ b/drivers/net/ethernet/microsoft/mana/mana.h @@ -635,6 +635,15 @@ struct mana_tx_package { struct gdma_posted_wqe_info wqe_info; }; +int mana_create_wq_obj(struct mana_port_context *apc, + mana_handle_t vport, + u32 wq_type, struct mana_obj_spec *wq_spec, + struct mana_obj_spec *cq_spec, + mana_handle_t *wq_obj); + +void mana_destroy_wq_obj(struct mana_port_context *apc, u32 wq_type, + mana_handle_t wq_obj); + int mana_cfg_vport(struct mana_port_context *apc, u32 protection_dom_id, u32 doorbell_pg_id); void mana_uncfg_vport(struct mana_port_context *apc); diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c index efe14a343fd1..6ad4bc8cbc99 100644 --- a/drivers/net/ethernet/microsoft/mana/mana_en.c +++ b/drivers/net/ethernet/microsoft/mana/mana_en.c @@ -792,11 +792,11 @@ static int mana_cfg_vport_steering(struct mana_port_context 
 	return err;
 }
 
-static int mana_create_wq_obj(struct mana_port_context *apc,
-			      mana_handle_t vport,
-			      u32 wq_type, struct mana_obj_spec *wq_spec,
-			      struct mana_obj_spec *cq_spec,
-			      mana_handle_t *wq_obj)
+int mana_create_wq_obj(struct mana_port_context *apc,
+		       mana_handle_t vport,
+		       u32 wq_type, struct mana_obj_spec *wq_spec,
+		       struct mana_obj_spec *cq_spec,
+		       mana_handle_t *wq_obj)
 {
 	struct mana_create_wqobj_resp resp = {};
 	struct mana_create_wqobj_req req = {};
@@ -845,9 +845,10 @@ static int mana_create_wq_obj(struct mana_port_context *apc,
 out:
 	return err;
 }
+EXPORT_SYMBOL_NS(mana_create_wq_obj, NET_MANA);
 
-static void mana_destroy_wq_obj(struct mana_port_context *apc, u32 wq_type,
-				mana_handle_t wq_obj)
+void mana_destroy_wq_obj(struct mana_port_context *apc, u32 wq_type,
+			 mana_handle_t wq_obj)
 {
 	struct mana_destroy_wqobj_resp resp = {};
 	struct mana_destroy_wqobj_req req = {};
@@ -872,6 +873,7 @@ static void mana_destroy_wq_obj(struct mana_port_context *apc, u32 wq_type,
 		netdev_err(ndev, "Failed to destroy WQ object: %d, 0x%x\n", err,
 			   resp.hdr.status);
 }
+EXPORT_SYMBOL_NS(mana_destroy_wq_obj, NET_MANA);
 
 static void mana_destroy_eq(struct mana_context *ac)
 {
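A hedged sketch of how an RDMA driver might drive the newly exported pair; the wq_spec/cq_spec contents are assumed to be prepared by the caller, and the function below is illustrative rather than code from this series:

/* Illustrative: create a send WQ object on behalf of a RAW QP. */
static int example_create_raw_sq(struct mana_port_context *apc,
				 struct mana_obj_spec *wq_spec,
				 struct mana_obj_spec *cq_spec)
{
	mana_handle_t wq_obj = INVALID_MANA_HANDLE;
	int err;

	err = mana_create_wq_obj(apc, apc->port_handle, GDMA_SQ,
				 wq_spec, cq_spec, &wq_obj);
	if (err)
		return err;

	/* ... hand wq_obj to the QP; the teardown path would call: */
	mana_destroy_wq_obj(apc, GDMA_SQ, wq_obj);
	return 0;
}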
From patchwork Sat Oct 22 00:01:23 2022
From: longli@linuxonhyperv.com
To: "K. Y. Srinivasan", Haiyang Zhang, Stephen Hemminger, Wei Liu,
	Dexuan Cui, "David S. Miller", Jakub Kicinski, Paolo Abeni,
	Jason Gunthorpe, Leon Romanovsky, edumazet@google.com,
	shiraz.saleem@intel.com, Ajay Sharma
Cc: linux-hyperv@vger.kernel.org, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, Long Li
Subject: [Patch v9 06/12] net: mana: Record port number in netdev
Date: Fri, 21 Oct 2022 17:01:23 -0700
Message-Id: <1666396889-31288-7-git-send-email-longli@linuxonhyperv.com>
In-Reply-To: <1666396889-31288-1-git-send-email-longli@linuxonhyperv.com>
References: <1666396889-31288-1-git-send-email-longli@linuxonhyperv.com>
Reply-To: longli@microsoft.com

From: Long Li

The port number is useful for a user-mode application to identify this
net device based on the port index. Set it to the correct value in ndev.
Reviewed-by: Dexuan Cui
Signed-off-by: Long Li
Acked-by: Haiyang Zhang
---
 drivers/net/ethernet/microsoft/mana/mana_en.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
index 6ad4bc8cbc99..b6303a43fa7c 100644
--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
+++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
@@ -2133,6 +2133,7 @@ static int mana_probe_port(struct mana_context *ac, int port_idx,
 	ndev->max_mtu = ndev->mtu;
 	ndev->min_mtu = ndev->mtu;
 	ndev->needed_headroom = MANA_HEADROOM;
+	ndev->dev_port = port_idx;
 	SET_NETDEV_DEV(ndev, gc->dev);
 
 	netif_carrier_off(ndev);
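From user space, the value set here surfaces as the netdev's dev_port sysfs attribute. A minimal sketch of reading it (the interface name "eth1" is an example, not from this series):

/* Read /sys/class/net/eth1/dev_port to learn which MANA port backs eth1. */
#include <stdio.h>

int main(void)
{
	unsigned int port;
	FILE *f = fopen("/sys/class/net/eth1/dev_port", "r");

	if (!f)
		return 1;
	if (fscanf(f, "%u", &port) == 1)
		printf("eth1 is MANA port %u\n", port);
	fclose(f);
	return 0;
}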
From patchwork Sat Oct 22 00:01:24 2022
From: longli@linuxonhyperv.com
To: "K. Y. Srinivasan", Haiyang Zhang, Stephen Hemminger, Wei Liu,
	Dexuan Cui, "David S. Miller", Jakub Kicinski, Paolo Abeni,
	Jason Gunthorpe, Leon Romanovsky, edumazet@google.com,
	shiraz.saleem@intel.com, Ajay Sharma
Cc: linux-hyperv@vger.kernel.org, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, Long Li
Subject: [Patch v9 07/12] net: mana: Move header files to a common location
Date: Fri, 21 Oct 2022 17:01:24 -0700
Message-Id: <1666396889-31288-8-git-send-email-longli@linuxonhyperv.com>
In-Reply-To: <1666396889-31288-1-git-send-email-longli@linuxonhyperv.com>
References: <1666396889-31288-1-git-send-email-longli@linuxonhyperv.com>
Reply-To: longli@microsoft.com

From: Long Li

In preparation for adding the MANA RDMA driver, move all the required
header files to a common location for use by both the Ethernet and RDMA
drivers.
Reviewed-by: Dexuan Cui
Signed-off-by: Long Li
Acked-by: Haiyang Zhang
---
Change log:
v2: Move headers to include/net/mana, instead of include/linux/mana

 MAINTAINERS                                                    | 1 +
 drivers/net/ethernet/microsoft/mana/gdma_main.c                | 2 +-
 drivers/net/ethernet/microsoft/mana/hw_channel.c               | 4 ++--
 drivers/net/ethernet/microsoft/mana/mana_bpf.c                 | 2 +-
 drivers/net/ethernet/microsoft/mana/mana_en.c                  | 4 ++--
 drivers/net/ethernet/microsoft/mana/mana_ethtool.c             | 2 +-
 drivers/net/ethernet/microsoft/mana/shm_channel.c              | 2 +-
 {drivers/net/ethernet/microsoft => include/net}/mana/gdma.h    | 0
 .../net/ethernet/microsoft => include/net}/mana/hw_channel.h   | 0
 {drivers/net/ethernet/microsoft => include/net}/mana/mana.h    | 0
 .../ethernet/microsoft => include/net}/mana/mana_auxiliary.h   | 0
 .../net/ethernet/microsoft => include/net}/mana/shm_channel.h  | 0
 12 files changed, 9 insertions(+), 8 deletions(-)
 rename {drivers/net/ethernet/microsoft => include/net}/mana/gdma.h (100%)
 rename {drivers/net/ethernet/microsoft => include/net}/mana/hw_channel.h (100%)
 rename {drivers/net/ethernet/microsoft => include/net}/mana/mana.h (100%)
 rename {drivers/net/ethernet/microsoft => include/net}/mana/mana_auxiliary.h (100%)
 rename {drivers/net/ethernet/microsoft => include/net}/mana/shm_channel.h (100%)

diff --git a/MAINTAINERS b/MAINTAINERS
index 8a5012ba6ff9..8b9a50756c7e 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -9457,6 +9457,7 @@ F:	include/asm-generic/hyperv-tlfs.h
 F:	include/asm-generic/mshyperv.h
 F:	include/clocksource/hyperv_timer.h
 F:	include/linux/hyperv.h
+F:	include/net/mana
 F:	include/uapi/linux/hyperv.h
 F:	net/vmw_vsock/hyperv_transport.c
 F:	tools/hv/

diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c
index aab22911f20d..93847ed0e4b3 100644
--- a/drivers/net/ethernet/microsoft/mana/gdma_main.c
+++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c
@@ -6,7 +6,7 @@
 #include <linux/utsname.h>
 #include <linux/version.h>
 
-#include "mana.h"
+#include <net/mana/mana.h>
 
 static u32 mana_gd_r32(struct gdma_context *g, u64 offset)
 {

diff --git a/drivers/net/ethernet/microsoft/mana/hw_channel.c b/drivers/net/ethernet/microsoft/mana/hw_channel.c
index 543a5d5c304f..76829ab43d40 100644
--- a/drivers/net/ethernet/microsoft/mana/hw_channel.c
+++ b/drivers/net/ethernet/microsoft/mana/hw_channel.c
@@ -1,8 +1,8 @@
 // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
 /* Copyright (c) 2021, Microsoft Corporation. */
 
-#include "gdma.h"
-#include "hw_channel.h"
+#include <net/mana/gdma.h>
+#include <net/mana/hw_channel.h>
 
 static int mana_hwc_get_msg_index(struct hw_channel_context *hwc, u16 *msg_id)
 {

diff --git a/drivers/net/ethernet/microsoft/mana/mana_bpf.c b/drivers/net/ethernet/microsoft/mana/mana_bpf.c
index 421fd39ff3a8..3caea631229c 100644
--- a/drivers/net/ethernet/microsoft/mana/mana_bpf.c
+++ b/drivers/net/ethernet/microsoft/mana/mana_bpf.c
@@ -8,7 +8,7 @@
 #include <linux/bpf.h>
 #include <linux/bpf_trace.h>
 
-#include "mana.h"
+#include <net/mana/mana.h>
 
 void mana_xdp_tx(struct sk_buff *skb, struct net_device *ndev)
 {

diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
index b6303a43fa7c..ffa2a0e2c213 100644
--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
+++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
@@ -12,8 +12,8 @@
 #include <net/checksum.h>
 #include <net/ip6_checksum.h>
 
-#include "mana.h"
-#include "mana_auxiliary.h"
+#include <net/mana/mana.h>
+#include <net/mana/mana_auxiliary.h>
 
 static DEFINE_IDA(mana_adev_ida);

diff --git a/drivers/net/ethernet/microsoft/mana/mana_ethtool.c b/drivers/net/ethernet/microsoft/mana/mana_ethtool.c
index c530db76880f..6f98de6d7440 100644
--- a/drivers/net/ethernet/microsoft/mana/mana_ethtool.c
+++ b/drivers/net/ethernet/microsoft/mana/mana_ethtool.c
@@ -5,7 +5,7 @@
 #include <linux/etherdevice.h>
 #include <linux/ethtool.h>
 
-#include "mana.h"
+#include <net/mana/mana.h>
 
 static const struct {
 	char name[ETH_GSTRING_LEN];

diff --git a/drivers/net/ethernet/microsoft/mana/shm_channel.c b/drivers/net/ethernet/microsoft/mana/shm_channel.c
index da255da62176..5553af9c8085 100644
--- a/drivers/net/ethernet/microsoft/mana/shm_channel.c
+++ b/drivers/net/ethernet/microsoft/mana/shm_channel.c
@@ -6,7 +6,7 @@
 #include <linux/io.h>
 #include <linux/mm.h>
 
-#include "shm_channel.h"
+#include <net/mana/shm_channel.h>
 
 #define PAGE_FRAME_L48_WIDTH_BYTES 6
 #define PAGE_FRAME_L48_WIDTH_BITS (PAGE_FRAME_L48_WIDTH_BYTES * 8)

diff --git a/drivers/net/ethernet/microsoft/mana/gdma.h b/include/net/mana/gdma.h
similarity index 100%
rename from drivers/net/ethernet/microsoft/mana/gdma.h
rename to include/net/mana/gdma.h
diff --git a/drivers/net/ethernet/microsoft/mana/hw_channel.h b/include/net/mana/hw_channel.h
similarity index 100%
rename from drivers/net/ethernet/microsoft/mana/hw_channel.h
rename to include/net/mana/hw_channel.h
diff --git a/drivers/net/ethernet/microsoft/mana/mana.h b/include/net/mana/mana.h
similarity index 100%
rename from drivers/net/ethernet/microsoft/mana/mana.h
rename to include/net/mana/mana.h
diff --git a/drivers/net/ethernet/microsoft/mana/mana_auxiliary.h b/include/net/mana/mana_auxiliary.h
similarity index 100%
rename from drivers/net/ethernet/microsoft/mana/mana_auxiliary.h
rename to include/net/mana/mana_auxiliary.h
diff --git a/drivers/net/ethernet/microsoft/mana/shm_channel.h b/include/net/mana/shm_channel.h
similarity index 100%
rename from drivers/net/ethernet/microsoft/mana/shm_channel.h
rename to include/net/mana/shm_channel.h
b=cmiTUtLpOBzFvChsQkNi5sq/sn5yZZUgIVzKGGXofbhfMJYxkInvohHf2TfVkPE85s +uu1RIm8zfS3BXZlXB9JtQK/R6aIdTb1BaNpg2P8xiyfx4/fqt9/g5dt2grDXHTyKajT a1LwMqOW2lWVycCpYe228cqiWcCGj1iFfuyjmtEJTQV5mII9XocB5c0rvQ28gsCzv1yn bMSk0OO9yPK4/0McLVWmvE4QhUkbdkgZhBYfOS8ASYomvmgPF73GVHz14o++ByDbu7Ri 1bOeB/GbkZWPgc4tEdBeFHZxBRggNV/IT7B2XJAe/I3FX6k14rH5tNcm5/Pttz+QOfEw vXIw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:reply-to:references:in-reply-to:message-id:date :subject:cc:to:from:dkim-signature:dkim-filter; bh=f/L5JWQtlV4EWOSXR6x1yXXG40XtIaw4c4cKTQrFw1k=; b=ZDdji146QNdtWOy8W/hLbm2lYG1mYlJa8wSdMhe+YqsLzR0VT2Af1dVv4vvKppdSBx sm0qi+mhEnJQzaCI97KX83sbSEjXBG/wOSxTo4gIzQvZ3aox7BFDCasmEaQ5JH0rYwa2 ldwfhp4pzwP1hlXRAZVMgfwzeyNWZWES1TjOHWh8x0wCZwCsWAhhQxnQeMgWKzFScRXG OlLMGiec7gGHah54OZZRk1UZYPs/O95scIkK+RcFTMpa913OcqMT7NbZcg5YzbdjTfxM GWAOBPAvcWHEttDTE8mR+/kNaXjsDFY5CmBPNZXSQfsy9ZgGt/d2Q4RVso3wzgLkcdju O5ig== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linuxonhyperv.com header.s=default header.b=or0ILBuZ; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linuxonhyperv.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id fe8-20020a056402390800b004612505ece4si4408106edb.483.2022.10.21.17.06.27; Fri, 21 Oct 2022 17:06:50 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@linuxonhyperv.com header.s=default header.b=or0ILBuZ; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linuxonhyperv.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230186AbiJVACt (ORCPT + 99 others); Fri, 21 Oct 2022 20:02:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51646 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229996AbiJVABq (ORCPT ); Fri, 21 Oct 2022 20:01:46 -0400 Received: from linux.microsoft.com (linux.microsoft.com [13.77.154.182]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 17041263F29; Fri, 21 Oct 2022 17:01:42 -0700 (PDT) Received: by linux.microsoft.com (Postfix, from userid 1004) id ED2F420FF4E6; Fri, 21 Oct 2022 17:01:40 -0700 (PDT) DKIM-Filter: OpenDKIM Filter v2.11.0 linux.microsoft.com ED2F420FF4E6 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linuxonhyperv.com; s=default; t=1666396900; bh=f/L5JWQtlV4EWOSXR6x1yXXG40XtIaw4c4cKTQrFw1k=; h=From:To:Cc:Subject:Date:In-Reply-To:References:Reply-To:From; b=or0ILBuZasAw2eDR3COtxxo6hz+QpDPSfcXoS48AwK8kYoRBsoyszy5iKI9xUMqJf c4APW9aoY5a7ZPbVzPl+fA3eYScIEQTsBRdjpQucpSFbI/ZpTwCrWBFJQDoBRcghfF yodR/Nn06Q6aF8Hf67q/yMh40OZMStDc2gXvYCn8= From: longli@linuxonhyperv.com To: "K. Y. Srinivasan" , Haiyang Zhang , Stephen Hemminger , Wei Liu , Dexuan Cui , "David S. 
Miller" , Jakub Kicinski , Paolo Abeni , Jason Gunthorpe , Leon Romanovsky , edumazet@google.com, shiraz.saleem@intel.com, Ajay Sharma Cc: linux-hyperv@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, Long Li Subject: [Patch v9 08/12] net: mana: Define max values for SGL entries Date: Fri, 21 Oct 2022 17:01:25 -0700 Message-Id: <1666396889-31288-9-git-send-email-longli@linuxonhyperv.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1666396889-31288-1-git-send-email-longli@linuxonhyperv.com> References: <1666396889-31288-1-git-send-email-longli@linuxonhyperv.com> Reply-To: longli@microsoft.com X-Spam-Status: No, score=-11.6 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, RCVD_IN_DNSWL_MED,SPF_HELO_PASS,SPF_PASS,USER_IN_DEF_SPF_WL autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1747344121547994938?= X-GMAIL-MSGID: =?utf-8?q?1747344121547994938?= From: Long Li The number of maximum SGl entries should be computed from the maximum WQE size for the intended queue type and the corresponding OOB data size. This guarantees the hardware queue can successfully queue requests up to the queue depth exposed to the upper layer. Reviewed-by: Dexuan Cui Signed-off-by: Long Li Acked-by: Haiyang Zhang --- drivers/net/ethernet/microsoft/mana/mana_en.c | 2 +- include/net/mana/gdma.h | 7 +++++++ include/net/mana/mana.h | 4 +--- 3 files changed, 9 insertions(+), 4 deletions(-) diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c index ffa2a0e2c213..f6bcd0cc6cda 100644 --- a/drivers/net/ethernet/microsoft/mana/mana_en.c +++ b/drivers/net/ethernet/microsoft/mana/mana_en.c @@ -189,7 +189,7 @@ int mana_start_xmit(struct sk_buff *skb, struct net_device *ndev) pkg.wqe_req.client_data_unit = 0; pkg.wqe_req.num_sge = 1 + skb_shinfo(skb)->nr_frags; - WARN_ON_ONCE(pkg.wqe_req.num_sge > 30); + WARN_ON_ONCE(pkg.wqe_req.num_sge > MAX_TX_WQE_SGL_ENTRIES); if (pkg.wqe_req.num_sge <= ARRAY_SIZE(pkg.sgl_array)) { pkg.wqe_req.sgl = pkg.sgl_array; diff --git a/include/net/mana/gdma.h b/include/net/mana/gdma.h index 72eaec2470c0..0b0c0cd0b6bd 100644 --- a/include/net/mana/gdma.h +++ b/include/net/mana/gdma.h @@ -427,6 +427,13 @@ struct gdma_wqe { #define MAX_TX_WQE_SIZE 512 #define MAX_RX_WQE_SIZE 256 +#define MAX_TX_WQE_SGL_ENTRIES ((GDMA_MAX_SQE_SIZE - \ + sizeof(struct gdma_sge) - INLINE_OOB_SMALL_SIZE) / \ + sizeof(struct gdma_sge)) + +#define MAX_RX_WQE_SGL_ENTRIES ((GDMA_MAX_RQE_SIZE - \ + sizeof(struct gdma_sge)) / sizeof(struct gdma_sge)) + struct gdma_cqe { u32 cqe_data[GDMA_COMP_DATA_SIZE / 4]; diff --git a/include/net/mana/mana.h b/include/net/mana/mana.h index 6e9e86fb4c02..713a8f8cca9a 100644 --- a/include/net/mana/mana.h +++ b/include/net/mana/mana.h @@ -265,8 +265,6 @@ struct mana_cq { int budget; }; -#define GDMA_MAX_RQE_SGES 15 - struct mana_recv_buf_oob { /* A valid GDMA work request representing the data buffer. */ struct gdma_wqe_request wqe_req; @@ -276,7 +274,7 @@ struct mana_recv_buf_oob { /* SGL of the buffer going to be sent has part of the work request. */ u32 num_sge; - struct gdma_sge sgl[GDMA_MAX_RQE_SGES]; + struct gdma_sge sgl[MAX_RX_WQE_SGL_ENTRIES]; /* Required to store the result of mana_gd_post_work_request. 
---
 drivers/net/ethernet/microsoft/mana/mana_en.c | 2 +-
 include/net/mana/gdma.h                       | 7 +++++++
 include/net/mana/mana.h                       | 4 +---
 3 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
index ffa2a0e2c213..f6bcd0cc6cda 100644
--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
+++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
@@ -189,7 +189,7 @@ int mana_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 	pkg.wqe_req.client_data_unit = 0;

 	pkg.wqe_req.num_sge = 1 + skb_shinfo(skb)->nr_frags;
-	WARN_ON_ONCE(pkg.wqe_req.num_sge > 30);
+	WARN_ON_ONCE(pkg.wqe_req.num_sge > MAX_TX_WQE_SGL_ENTRIES);

 	if (pkg.wqe_req.num_sge <= ARRAY_SIZE(pkg.sgl_array)) {
 		pkg.wqe_req.sgl = pkg.sgl_array;
diff --git a/include/net/mana/gdma.h b/include/net/mana/gdma.h
index 72eaec2470c0..0b0c0cd0b6bd 100644
--- a/include/net/mana/gdma.h
+++ b/include/net/mana/gdma.h
@@ -427,6 +427,13 @@ struct gdma_wqe {
 #define MAX_TX_WQE_SIZE 512
 #define MAX_RX_WQE_SIZE 256

+#define MAX_TX_WQE_SGL_ENTRIES ((GDMA_MAX_SQE_SIZE - \
+		sizeof(struct gdma_sge) - INLINE_OOB_SMALL_SIZE) / \
+		sizeof(struct gdma_sge))
+
+#define MAX_RX_WQE_SGL_ENTRIES ((GDMA_MAX_RQE_SIZE - \
+		sizeof(struct gdma_sge)) / sizeof(struct gdma_sge))
+
 struct gdma_cqe {
 	u32 cqe_data[GDMA_COMP_DATA_SIZE / 4];
diff --git a/include/net/mana/mana.h b/include/net/mana/mana.h
index 6e9e86fb4c02..713a8f8cca9a 100644
--- a/include/net/mana/mana.h
+++ b/include/net/mana/mana.h
@@ -265,8 +265,6 @@ struct mana_cq {
 	int budget;
 };

-#define GDMA_MAX_RQE_SGES 15
-
 struct mana_recv_buf_oob {
 	/* A valid GDMA work request representing the data buffer. */
 	struct gdma_wqe_request wqe_req;
@@ -276,7 +274,7 @@ struct mana_recv_buf_oob {
 	/* SGL of the buffer going to be sent has part of the work request. */
 	u32 num_sge;
-	struct gdma_sge sgl[GDMA_MAX_RQE_SGES];
+	struct gdma_sge sgl[MAX_RX_WQE_SGL_ENTRIES];

 	/* Required to store the result of mana_gd_post_work_request.
	 * gdma_posted_wqe_info.wqe_size_in_bu is required for progressing the

From patchwork Sat Oct 22 00:01:26 2022
From: longli@linuxonhyperv.com
Subject: [Patch v9 09/12] net: mana: Define and process GDMA response code GDMA_STATUS_MORE_ENTRIES
Date: Fri, 21 Oct 2022 17:01:26 -0700
Message-Id: <1666396889-31288-10-git-send-email-longli@linuxonhyperv.com>
In-Reply-To: <1666396889-31288-1-git-send-email-longli@linuxonhyperv.com>
Reply-To: longli@microsoft.com

From: Ajay Sharma

When doing memory registration, the PF may respond with
GDMA_STATUS_MORE_ENTRIES to indicate that a follow-up request is needed.
This is not an error and should be processed as expected.
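In other words, when a request's payload has to be split across several
hardware channel messages, every message except the last is acknowledged
with GDMA_STATUS_MORE_ENTRIES and only the final one with zero; patch 12's
mana_ib_gd_create_dma_region() relies on exactly this. A toy user-space
model of the handshake (illustrative only; the batch sizes are made up):

#include <stdio.h>

#define GDMA_STATUS_SUCCESS		0
#define GDMA_STATUS_MORE_ENTRIES	0x00000105

/* Pretend PF: accepts one batch of pages and reports whether it still
 * expects more batches before the region is complete. */
static unsigned int pf_accept_batch(unsigned int total, unsigned int *got,
				    unsigned int batch)
{
	*got += batch;
	return *got < total ? GDMA_STATUS_MORE_ENTRIES : GDMA_STATUS_SUCCESS;
}

int main(void)
{
	unsigned int total = 10, got = 0, per_msg = 4;

	while (got < total) {
		unsigned int batch = total - got > per_msg ? per_msg : total - got;
		/* Non-final batches are expected to return MORE_ENTRIES. */
		unsigned int expected = got + batch < total ?
					GDMA_STATUS_MORE_ENTRIES : GDMA_STATUS_SUCCESS;
		unsigned int status = pf_accept_batch(total, &got, batch);

		if (status != expected)
			return 1;	/* anything else is a real error */
		printf("registered %u/%u pages, status 0x%x\n", got, total, status);
	}
	return 0;
}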
Signed-off-by: Ajay Sharma
Reviewed-by: Dexuan Cui
Signed-off-by: Long Li
Acked-by: Haiyang Zhang
---
 drivers/net/ethernet/microsoft/mana/hw_channel.c | 2 +-
 include/net/mana/gdma.h                          | 2 ++
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/microsoft/mana/hw_channel.c b/drivers/net/ethernet/microsoft/mana/hw_channel.c
index 76829ab43d40..9d1507eba5b9 100644
--- a/drivers/net/ethernet/microsoft/mana/hw_channel.c
+++ b/drivers/net/ethernet/microsoft/mana/hw_channel.c
@@ -836,7 +836,7 @@ int mana_hwc_send_request(struct hw_channel_context *hwc, u32 req_len,
 		goto out;
 	}

-	if (ctx->status_code) {
+	if (ctx->status_code && ctx->status_code != GDMA_STATUS_MORE_ENTRIES) {
 		dev_err(hwc->dev, "HWC: Failed hw_channel req: 0x%x\n",
 			ctx->status_code);
 		err = -EPROTO;
diff --git a/include/net/mana/gdma.h b/include/net/mana/gdma.h
index 0b0c0cd0b6bd..a9b7930dfbf8 100644
--- a/include/net/mana/gdma.h
+++ b/include/net/mana/gdma.h
@@ -9,6 +9,8 @@

 #include "shm_channel.h"

+#define GDMA_STATUS_MORE_ENTRIES 0x00000105
+
 /* Structures labeled with "HW DATA" are exchanged with the hardware. All of
  * them are naturally aligned and hence don't need __packed.
  */

From patchwork Sat Oct 22 00:01:27 2022
From: longli@linuxonhyperv.com
Subject: [Patch v9 10/12] net: mana: Define data structures for allocating doorbell page from GDMA
Date: Fri, 21 Oct 2022 17:01:27 -0700
Message-Id: <1666396889-31288-11-git-send-email-longli@linuxonhyperv.com>
In-Reply-To: <1666396889-31288-1-git-send-email-longli@linuxonhyperv.com>
Reply-To: longli@microsoft.com

From: Long Li

The RDMA device needs to allocate doorbell pages for each user context.
Define the GDMA data structures for use by the RDMA driver.
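For context, patch 12 consumes these structures from the RDMA driver;
allocating a single doorbell page index looks roughly like the sketch
below, which mirrors the mana_gd_allocate_doorbell_page() helper that
patch adds (mana_gd_init_req_hdr() and mana_gd_send_request() are the
existing GDMA helpers; this is a sketch, not new driver code):

/* Sketch: allocate one doorbell page index via the new messages. */
static int alloc_one_doorbell_page(struct gdma_context *gc, int *page)
{
	struct gdma_allocate_resource_range_req req = {};
	struct gdma_allocate_resource_range_resp resp = {};
	int err;

	mana_gd_init_req_hdr(&req.hdr, GDMA_ALLOCATE_RESOURCE_RANGE,
			     sizeof(req), sizeof(resp));

	req.resource_type = GDMA_RESOURCE_DOORBELL_PAGE;
	req.num_resources = 1;
	req.alignment = 1;
	req.allocated_resources = 0;	/* have GDMA search from index 0 */

	err = mana_gd_send_request(gc, sizeof(req), &req, sizeof(resp), &resp);
	if (err || resp.hdr.status)
		return err ? err : -EPROTO;

	*page = resp.allocated_resources;
	return 0;
}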
Reviewed-by: Dexuan Cui
Signed-off-by: Long Li
Acked-by: Haiyang Zhang
---
Change log:
v4: use EXPORT_SYMBOL_NS
v7: move mana_gd_allocate_doorbell_page() and
    mana_gd_destroy_doorbell_page() to the RDMA driver
v8: remove extra line in header file

 include/net/mana/gdma.h | 25 +++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/include/net/mana/gdma.h b/include/net/mana/gdma.h
index a9b7930dfbf8..055408a5baf3 100644
--- a/include/net/mana/gdma.h
+++ b/include/net/mana/gdma.h
@@ -24,11 +24,15 @@ enum gdma_request_type {
 	GDMA_GENERATE_TEST_EQE = 10,
 	GDMA_CREATE_QUEUE = 12,
 	GDMA_DISABLE_QUEUE = 13,
+	GDMA_ALLOCATE_RESOURCE_RANGE = 22,
+	GDMA_DESTROY_RESOURCE_RANGE = 24,
 	GDMA_CREATE_DMA_REGION = 25,
 	GDMA_DMA_REGION_ADD_PAGES = 26,
 	GDMA_DESTROY_DMA_REGION = 27,
 };

+#define GDMA_RESOURCE_DOORBELL_PAGE 27
+
 enum gdma_queue_type {
 	GDMA_INVALID_QUEUE,
 	GDMA_SQ,
@@ -587,6 +591,26 @@ struct gdma_register_device_resp {
 	u32 db_id;
 }; /* HW DATA */

+struct gdma_allocate_resource_range_req {
+	struct gdma_req_hdr hdr;
+	u32 resource_type;
+	u32 num_resources;
+	u32 alignment;
+	u32 allocated_resources;
+};
+
+struct gdma_allocate_resource_range_resp {
+	struct gdma_resp_hdr hdr;
+	u32 allocated_resources;
+};
+
+struct gdma_destroy_resource_range_req {
+	struct gdma_req_hdr hdr;
+	u32 resource_type;
+	u32 num_resources;
+	u32 allocated_resources;
+};
+
 /* GDMA_CREATE_QUEUE */
 struct gdma_create_queue_req {
 	struct gdma_req_hdr hdr;

From patchwork Sat Oct 22 00:01:28 2022
From: longli@linuxonhyperv.com
Subject: [Patch v9 11/12] net: mana: Define data structures for protection domain and memory registration
Date: Fri, 21 Oct 2022 17:01:28 -0700
Message-Id: <1666396889-31288-12-git-send-email-longli@linuxonhyperv.com>
In-Reply-To: <1666396889-31288-1-git-send-email-longli@linuxonhyperv.com>
Reply-To: longli@microsoft.com

From: Ajay Sharma

The MANA hardware supports protection domains and memory registration for
use in RDMA environments. Add those definitions and expose them for use
by the RDMA driver.
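As a sketch of how a consumer would use these definitions (the RDMA
driver's mr.c in patch 12 does the real work): a GVA-type MR pairs a PD
handle with a previously created DMA region and a user virtual address.
The helper below is hypothetical; the handles would come from the
GDMA_CREATE_PD and GDMA_CREATE_DMA_REGION responses:

/* Hypothetical helper: describe a user-VA memory registration with the
 * new types before handing it to the GDMA_CREATE_MR path. */
static void fill_gva_mr_params(struct gdma_create_mr_params *p,
			       gdma_obj_handle_t pd_handle,
			       gdma_obj_handle_t dma_region_handle,
			       u64 user_va)
{
	p->pd_handle = pd_handle;
	p->mr_type = GDMA_MR_TYPE_GVA;
	p->gva.dma_region_handle = dma_region_handle;
	p->gva.virtual_address = user_va;
	p->gva.access_flags = GDMA_ACCESS_FLAG_LOCAL_READ |
			      GDMA_ACCESS_FLAG_LOCAL_WRITE;
}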
Signed-off-by: Ajay Sharma
Signed-off-by: Long Li
Reviewed-by: Dexuan Cui
Acked-by: Haiyang Zhang
---
Change log:
v3: format/coding style changes
v5: remove unused code dealing with kernel-mode MR

 .../net/ethernet/microsoft/mana/gdma_main.c   |  27 ++--
 drivers/net/ethernet/microsoft/mana/mana_en.c |  18 +--
 include/net/mana/gdma.h                       | 121 +++++++++++++++++-
 3 files changed, 143 insertions(+), 23 deletions(-)

diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c
index 93847ed0e4b3..fae7a93c77f0 100644
--- a/drivers/net/ethernet/microsoft/mana/gdma_main.c
+++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c
@@ -198,7 +198,7 @@ static int mana_gd_create_hw_eq(struct gdma_context *gc,
 	req.type = queue->type;
 	req.pdid = queue->gdma_dev->pdid;
 	req.doolbell_id = queue->gdma_dev->doorbell;
-	req.gdma_region = queue->mem_info.gdma_region;
+	req.gdma_region = queue->mem_info.dma_region_handle;
 	req.queue_size = queue->queue_size;
 	req.log2_throttle_limit = queue->eq.log2_throttle_limit;
 	req.eq_pci_msix_index = queue->eq.msix_index;
@@ -212,7 +212,7 @@ static int mana_gd_create_hw_eq(struct gdma_context *gc,
 	queue->id = resp.queue_index;

 	queue->eq.disable_needed = true;
-	queue->mem_info.gdma_region = GDMA_INVALID_DMA_REGION;
+	queue->mem_info.dma_region_handle = GDMA_INVALID_DMA_REGION;

 	return 0;
 }
@@ -666,24 +666,30 @@ int mana_gd_create_hwc_queue(struct gdma_dev *gd,
 	return err;
 }

-static void mana_gd_destroy_dma_region(struct gdma_context *gc, u64 gdma_region)
+int mana_gd_destroy_dma_region(struct gdma_context *gc,
+			       gdma_obj_handle_t dma_region_handle)
 {
 	struct gdma_destroy_dma_region_req req = {};
 	struct gdma_general_resp resp = {};
 	int err;

-	if (gdma_region == GDMA_INVALID_DMA_REGION)
-		return;
+	if (dma_region_handle == GDMA_INVALID_DMA_REGION)
+		return 0;

 	mana_gd_init_req_hdr(&req.hdr, GDMA_DESTROY_DMA_REGION, sizeof(req),
 			     sizeof(resp));
-	req.gdma_region = gdma_region;
+	req.dma_region_handle = dma_region_handle;

 	err = mana_gd_send_request(gc, sizeof(req), &req, sizeof(resp), &resp);
-	if (err || resp.hdr.status)
+	if (err || resp.hdr.status) {
 		dev_err(gc->dev, "Failed to destroy DMA region: %d, 0x%x\n",
 			err, resp.hdr.status);
+		return -EPROTO;
+	}
+
+	return 0;
 }
+EXPORT_SYMBOL_NS(mana_gd_destroy_dma_region, NET_MANA);

 static int mana_gd_create_dma_region(struct gdma_dev *gd,
 				     struct gdma_mem_info *gmi)
@@ -728,14 +734,15 @@ static int mana_gd_create_dma_region(struct gdma_dev *gd,
 	if (err)
 		goto out;

-	if (resp.hdr.status || resp.gdma_region == GDMA_INVALID_DMA_REGION) {
+	if (resp.hdr.status ||
+	    resp.dma_region_handle == GDMA_INVALID_DMA_REGION) {
 		dev_err(gc->dev, "Failed to create DMA region: 0x%x\n",
 			resp.hdr.status);
 		err = -EPROTO;
 		goto out;
 	}

-	gmi->gdma_region = resp.gdma_region;
+	gmi->dma_region_handle = resp.dma_region_handle;
 out:
 	kfree(req);
 	return err;
@@ -858,7 +865,7 @@ void mana_gd_destroy_queue(struct gdma_context *gc, struct gdma_queue *queue)
 		return;
 	}

-	mana_gd_destroy_dma_region(gc, gmi->gdma_region);
+	mana_gd_destroy_dma_region(gc, gmi->dma_region_handle);
 	mana_gd_free_memory(gmi);
 	kfree(queue);
 }
diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
index f6bcd0cc6cda..1c59502d34b5 100644
--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
+++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
@@ -1523,10 +1523,10 @@ static int mana_create_txq(struct mana_port_context *apc,
 	memset(&wq_spec, 0, sizeof(wq_spec));
 	memset(&cq_spec, 0, sizeof(cq_spec));

-	wq_spec.gdma_region = txq->gdma_sq->mem_info.gdma_region;
+	wq_spec.gdma_region = txq->gdma_sq->mem_info.dma_region_handle;
 	wq_spec.queue_size = txq->gdma_sq->queue_size;

-	cq_spec.gdma_region = cq->gdma_cq->mem_info.gdma_region;
+	cq_spec.gdma_region = cq->gdma_cq->mem_info.dma_region_handle;
 	cq_spec.queue_size = cq->gdma_cq->queue_size;
 	cq_spec.modr_ctx_id = 0;
 	cq_spec.attached_eq = cq->gdma_cq->cq.parent->id;
@@ -1541,8 +1541,10 @@ static int mana_create_txq(struct mana_port_context *apc,
 	txq->gdma_sq->id = wq_spec.queue_index;
 	cq->gdma_cq->id = cq_spec.queue_index;

-	txq->gdma_sq->mem_info.gdma_region = GDMA_INVALID_DMA_REGION;
-	cq->gdma_cq->mem_info.gdma_region = GDMA_INVALID_DMA_REGION;
+	txq->gdma_sq->mem_info.dma_region_handle =
+		GDMA_INVALID_DMA_REGION;
+	cq->gdma_cq->mem_info.dma_region_handle =
+		GDMA_INVALID_DMA_REGION;

 	txq->gdma_txq_id = txq->gdma_sq->id;
@@ -1753,10 +1755,10 @@ static struct mana_rxq *mana_create_rxq(struct mana_port_context *apc,
 	memset(&wq_spec, 0, sizeof(wq_spec));
 	memset(&cq_spec, 0, sizeof(cq_spec));

-	wq_spec.gdma_region = rxq->gdma_rq->mem_info.gdma_region;
+	wq_spec.gdma_region = rxq->gdma_rq->mem_info.dma_region_handle;
 	wq_spec.queue_size = rxq->gdma_rq->queue_size;

-	cq_spec.gdma_region = cq->gdma_cq->mem_info.gdma_region;
+	cq_spec.gdma_region = cq->gdma_cq->mem_info.dma_region_handle;
 	cq_spec.queue_size = cq->gdma_cq->queue_size;
 	cq_spec.modr_ctx_id = 0;
 	cq_spec.attached_eq = cq->gdma_cq->cq.parent->id;
@@ -1769,8 +1771,8 @@ static struct mana_rxq *mana_create_rxq(struct mana_port_context *apc,
 	rxq->gdma_rq->id = wq_spec.queue_index;
 	cq->gdma_cq->id = cq_spec.queue_index;

-	rxq->gdma_rq->mem_info.gdma_region = GDMA_INVALID_DMA_REGION;
-	cq->gdma_cq->mem_info.gdma_region = GDMA_INVALID_DMA_REGION;
+	rxq->gdma_rq->mem_info.dma_region_handle = GDMA_INVALID_DMA_REGION;
+	cq->gdma_cq->mem_info.dma_region_handle = GDMA_INVALID_DMA_REGION;

 	rxq->gdma_id = rxq->gdma_rq->id;
 	cq->gdma_id = cq->gdma_cq->id;
diff --git a/include/net/mana/gdma.h b/include/net/mana/gdma.h
index 055408a5baf3..221adc96340c 100644
--- a/include/net/mana/gdma.h
+++ b/include/net/mana/gdma.h
@@ -29,6 +29,10 @@ enum gdma_request_type {
 	GDMA_CREATE_DMA_REGION = 25,
 	GDMA_DMA_REGION_ADD_PAGES = 26,
 	GDMA_DESTROY_DMA_REGION = 27,
+	GDMA_CREATE_PD = 29,
+	GDMA_DESTROY_PD = 30,
+	GDMA_CREATE_MR = 31,
+	GDMA_DESTROY_MR = 32,
 };

 #define GDMA_RESOURCE_DOORBELL_PAGE 27
@@ -61,6 +65,8 @@ enum {
 	GDMA_DEVICE_MANA = 2,
 };

+typedef u64 gdma_obj_handle_t;
+
 struct gdma_resource {
 	/* Protect the bitmap */
 	spinlock_t lock;
@@ -194,7 +200,7 @@ struct gdma_mem_info {
 	u64 length;

 	/* Allocated by the PF driver */
-	u64 gdma_region;
+	gdma_obj_handle_t dma_region_handle;
 };

 #define REGISTER_ATB_MST_MKEY_LOWER_SIZE 8
@@ -618,7 +624,7 @@ struct gdma_create_queue_req {
 	u32 reserved1;
 	u32 pdid;
 	u32 doolbell_id;
-	u64 gdma_region;
+	gdma_obj_handle_t gdma_region;
 	u32 reserved2;
 	u32 queue_size;
 	u32 log2_throttle_limit;
@@ -645,6 +651,28 @@ struct gdma_disable_queue_req {
 	u32 alloc_res_id_on_creation;
 }; /* HW DATA */

+enum atb_page_size {
+	ATB_PAGE_SIZE_4K,
+	ATB_PAGE_SIZE_8K,
+	ATB_PAGE_SIZE_16K,
+	ATB_PAGE_SIZE_32K,
+	ATB_PAGE_SIZE_64K,
+	ATB_PAGE_SIZE_128K,
+	ATB_PAGE_SIZE_256K,
+	ATB_PAGE_SIZE_512K,
+	ATB_PAGE_SIZE_1M,
+	ATB_PAGE_SIZE_2M,
+	ATB_PAGE_SIZE_MAX,
+};
+
+enum gdma_mr_access_flags {
+	GDMA_ACCESS_FLAG_LOCAL_READ = BIT_ULL(0),
+	GDMA_ACCESS_FLAG_LOCAL_WRITE = BIT_ULL(1),
+	GDMA_ACCESS_FLAG_REMOTE_READ = BIT_ULL(2),
+	GDMA_ACCESS_FLAG_REMOTE_WRITE = BIT_ULL(3),
+	GDMA_ACCESS_FLAG_REMOTE_ATOMIC = BIT_ULL(4),
+};
+
 /* GDMA_CREATE_DMA_REGION */
 struct gdma_create_dma_region_req {
 	struct gdma_req_hdr hdr;
@@ -671,14 +699,14 @@ struct gdma_create_dma_region_req {

 struct gdma_create_dma_region_resp {
 	struct gdma_resp_hdr hdr;
-	u64 gdma_region;
+	gdma_obj_handle_t dma_region_handle;
 }; /* HW DATA */

 /* GDMA_DMA_REGION_ADD_PAGES */
 struct gdma_dma_region_add_pages_req {
 	struct gdma_req_hdr hdr;

-	u64 gdma_region;
+	gdma_obj_handle_t dma_region_handle;

 	u32 page_addr_list_len;
 	u32 reserved3;
@@ -690,9 +718,88 @@ struct gdma_dma_region_add_pages_req {
 struct gdma_destroy_dma_region_req {
 	struct gdma_req_hdr hdr;

-	u64 gdma_region;
+	gdma_obj_handle_t dma_region_handle;
 }; /* HW DATA */

+enum gdma_pd_flags {
+	GDMA_PD_FLAG_INVALID = 0,
+};
+
+struct gdma_create_pd_req {
+	struct gdma_req_hdr hdr;
+	enum gdma_pd_flags flags;
+	u32 reserved;
+};/* HW DATA */
+
+struct gdma_create_pd_resp {
+	struct gdma_resp_hdr hdr;
+	gdma_obj_handle_t pd_handle;
+	u32 pd_id;
+	u32 reserved;
+};/* HW DATA */
+
+struct gdma_destroy_pd_req {
+	struct gdma_req_hdr hdr;
+	gdma_obj_handle_t pd_handle;
+};/* HW DATA */
+
+struct gdma_destory_pd_resp {
+	struct gdma_resp_hdr hdr;
+};/* HW DATA */
+
+enum gdma_mr_type {
+	/* Guest Virtual Address - MRs of this type allow access
+	 * to memory mapped by PTEs associated with this MR using a virtual
+	 * address that is set up in the MST
+	 */
+	GDMA_MR_TYPE_GVA = 2,
+};
+
+struct gdma_create_mr_params {
+	gdma_obj_handle_t pd_handle;
+	enum gdma_mr_type mr_type;
+	union {
+		struct {
+			gdma_obj_handle_t dma_region_handle;
+			u64 virtual_address;
+			enum gdma_mr_access_flags access_flags;
+		} gva;
+	};
+};
+
+struct gdma_create_mr_request {
+	struct gdma_req_hdr hdr;
+	gdma_obj_handle_t pd_handle;
+	enum gdma_mr_type mr_type;
+	u32 reserved_1;
+
+	union {
+		struct {
+			gdma_obj_handle_t dma_region_handle;
+			u64 virtual_address;
+			enum gdma_mr_access_flags access_flags;
+		} gva;
+
+	};
+	u32 reserved_2;
+};/* HW DATA */
+
+struct gdma_create_mr_response {
+	struct gdma_resp_hdr hdr;
+	gdma_obj_handle_t mr_handle;
+	u32 lkey;
+	u32 rkey;
+};/* HW DATA */
+
+struct gdma_destroy_mr_request {
+	struct gdma_req_hdr hdr;
+	gdma_obj_handle_t mr_handle;
+};/* HW DATA */
+
+struct gdma_destroy_mr_response {
+	struct gdma_resp_hdr hdr;
+};/* HW DATA */
+
 int mana_gd_verify_vf_version(struct pci_dev *pdev);

 int mana_gd_register_device(struct gdma_dev *gd);
@@ -719,4 +826,8 @@ void mana_gd_free_memory(struct gdma_mem_info *gmi);

 int mana_gd_send_request(struct gdma_context *gc, u32 req_len, const void *req,
 			 u32 resp_len, void *resp);
+
+int mana_gd_destroy_dma_region(struct gdma_context *gc,
+			       gdma_obj_handle_t dma_region_handle);
+
 #endif /* _GDMA_H */
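Note that mana_gd_destroy_dma_region() is exported into the NET_MANA
symbol namespace, so any module that links against it must import that
namespace first, as the mana_ib module does in patch 12:

/* In the consuming module (see device.c in the next patch): */
MODULE_IMPORT_NS(NET_MANA);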
From patchwork Sat Oct 22 00:01:29 2022
From: longli@linuxonhyperv.com
Subject: [Patch v9 12/12] RDMA/mana_ib: Add a driver for Microsoft Azure Network Adapter
Date: Fri, 21 Oct 2022 17:01:29 -0700
Message-Id: <1666396889-31288-13-git-send-email-longli@linuxonhyperv.com>
In-Reply-To: <1666396889-31288-1-git-send-email-longli@linuxonhyperv.com>
Reply-To: longli@microsoft.com
Miller" , Jakub Kicinski , Paolo Abeni , Jason Gunthorpe , Leon Romanovsky , edumazet@google.com, shiraz.saleem@intel.com, Ajay Sharma Cc: linux-hyperv@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org, Long Li Subject: [Patch v9 12/12] RDMA/mana_ib: Add a driver for Microsoft Azure Network Adapter Date: Fri, 21 Oct 2022 17:01:29 -0700 Message-Id: <1666396889-31288-13-git-send-email-longli@linuxonhyperv.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1666396889-31288-1-git-send-email-longli@linuxonhyperv.com> References: <1666396889-31288-1-git-send-email-longli@linuxonhyperv.com> Reply-To: longli@microsoft.com X-Spam-Status: No, score=-11.6 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, RCVD_IN_DNSWL_MED,SPF_HELO_PASS,SPF_PASS,USER_IN_DEF_SPF_WL autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1747344124627503100?= X-GMAIL-MSGID: =?utf-8?q?1747344124627503100?= From: Long Li Add a RDMA VF driver for Microsoft Azure Network Adapter (MANA). Co-developed-by: Ajay Sharma Signed-off-by: Ajay Sharma Reviewed-by: Dexuan Cui Signed-off-by: Long Li --- Change log: v2: Changed coding sytles/formats Checked undersize for udata length Changed all logging to use ibdev_xxx() Avoided page array copy when doing MR Sorted driver ops Fixed warnings reported by kernel test robot v3: More coding sytle/format changes v4: Process error on hardware vport configuration v5: Change licenses to GPL-2.0-only Fix error handling in mana_ib_gd_create_dma_region() v6: rebased to rdma-next removed redundant initialization to return value in mana_ib_probe() added missing tabs at the end of mana_ib_gd_create_dma_region() v7: move mana_gd_destroy_doorbell_page() and mana_gd_allocate_doorbell_page() from GDMA to this driver use ib_umem_find_best_pgsz() for finding page size for registering dma regions with hardware fix a bug that may double free mana_ind_table in mana_ib_create_qp_rss() add Ajay Sharma to maintainer list add details to description in drivers/infiniband/hw/mana/Kconfig change multiple lines comments to use RDMA style from NETDEV style change mana_ib_dev_ops to static use module_auxiliary_driver() in place of module_init and module_exit move all user-triggerable error messages to debug messages check for ind_tbl_size overflow in mana_ib_create_qp_rss() v8: instead of EFAULT, use return code from ib_copy_from_udata() fix the race condition on mana_cfg_vport() use return code from mana_gd_allocate_doorbell_page() simplify error handling code in mana_ib_gd_create_dma_region() use U64_MAX in place of ((u64)(~(0ULL))) remove confusing debug output on vport steering failure fix uninitialized cleanup index i in mana_ib_create_qp_rss() v9: define MANA_IB_MAX_MR for max number of supported MRs MAINTAINERS | 9 + drivers/infiniband/Kconfig | 1 + drivers/infiniband/hw/Makefile | 1 + drivers/infiniband/hw/mana/Kconfig | 10 + drivers/infiniband/hw/mana/Makefile | 4 + drivers/infiniband/hw/mana/cq.c | 79 ++++ drivers/infiniband/hw/mana/device.c | 117 ++++++ drivers/infiniband/hw/mana/main.c | 507 ++++++++++++++++++++++++ drivers/infiniband/hw/mana/mana_ib.h | 162 ++++++++ drivers/infiniband/hw/mana/mr.c | 197 +++++++++ drivers/infiniband/hw/mana/qp.c | 506 +++++++++++++++++++++++ 
drivers/infiniband/hw/mana/wq.c | 115 ++++++ include/net/mana/mana.h | 3 + include/uapi/rdma/ib_user_ioctl_verbs.h | 1 + include/uapi/rdma/mana-abi.h | 66 +++ 15 files changed, 1778 insertions(+) create mode 100644 drivers/infiniband/hw/mana/Kconfig create mode 100644 drivers/infiniband/hw/mana/Makefile create mode 100644 drivers/infiniband/hw/mana/cq.c create mode 100644 drivers/infiniband/hw/mana/device.c create mode 100644 drivers/infiniband/hw/mana/main.c create mode 100644 drivers/infiniband/hw/mana/mana_ib.h create mode 100644 drivers/infiniband/hw/mana/mr.c create mode 100644 drivers/infiniband/hw/mana/qp.c create mode 100644 drivers/infiniband/hw/mana/wq.c create mode 100644 include/uapi/rdma/mana-abi.h diff --git a/MAINTAINERS b/MAINTAINERS index 8b9a50756c7e..81ee58f44956 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -13506,6 +13506,15 @@ F: drivers/scsi/smartpqi/smartpqi*.[ch] F: include/linux/cciss*.h F: include/uapi/linux/cciss*.h +MICROSOFT MANA RDMA DRIVER +M: Long Li +M: Ajay Sharma +L: linux-rdma@vger.kernel.org +S: Supported +F: drivers/infiniband/hw/mana/ +F: include/net/mana +F: include/uapi/rdma/mana-abi.h + MICROSOFT SURFACE AGGREGATOR TABLET-MODE SWITCH M: Maximilian Luz L: platform-driver-x86@vger.kernel.org diff --git a/drivers/infiniband/Kconfig b/drivers/infiniband/Kconfig index aa36ac618e72..ccc874478f0b 100644 --- a/drivers/infiniband/Kconfig +++ b/drivers/infiniband/Kconfig @@ -85,6 +85,7 @@ source "drivers/infiniband/hw/erdma/Kconfig" source "drivers/infiniband/hw/hfi1/Kconfig" source "drivers/infiniband/hw/hns/Kconfig" source "drivers/infiniband/hw/irdma/Kconfig" +source "drivers/infiniband/hw/mana/Kconfig" source "drivers/infiniband/hw/mlx4/Kconfig" source "drivers/infiniband/hw/mlx5/Kconfig" source "drivers/infiniband/hw/mthca/Kconfig" diff --git a/drivers/infiniband/hw/Makefile b/drivers/infiniband/hw/Makefile index 6b3a88046125..1211f4317a9f 100644 --- a/drivers/infiniband/hw/Makefile +++ b/drivers/infiniband/hw/Makefile @@ -4,6 +4,7 @@ obj-$(CONFIG_INFINIBAND_QIB) += qib/ obj-$(CONFIG_INFINIBAND_CXGB4) += cxgb4/ obj-$(CONFIG_INFINIBAND_EFA) += efa/ obj-$(CONFIG_INFINIBAND_IRDMA) += irdma/ +obj-$(CONFIG_MANA_INFINIBAND) += mana/ obj-$(CONFIG_MLX4_INFINIBAND) += mlx4/ obj-$(CONFIG_MLX5_INFINIBAND) += mlx5/ obj-$(CONFIG_INFINIBAND_OCRDMA) += ocrdma/ diff --git a/drivers/infiniband/hw/mana/Kconfig b/drivers/infiniband/hw/mana/Kconfig new file mode 100644 index 000000000000..546640657bac --- /dev/null +++ b/drivers/infiniband/hw/mana/Kconfig @@ -0,0 +1,10 @@ +# SPDX-License-Identifier: GPL-2.0-only +config MANA_INFINIBAND + tristate "Microsoft Azure Network Adapter support" + depends on NETDEVICES && ETHERNET && PCI && MICROSOFT_MANA + help + This driver provides low-level RDMA support for Microsoft Azure + Network Adapter (MANA). MANA supports RDMA features that can be used + for workloads (e.g. DPDK, MPI etc) that uses RDMA verbs to directly + access hardware from user-mode processes in Microsoft Azure cloud + environment. 
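Once this module binds to the auxiliary device exposed by the Ethernet
driver, the adapter appears as a regular verbs device (named mana_0,
mana_1, ..., per the ib_register_device() call further down). A minimal
user-space check with libibverbs (illustrative only; build with
-libverbs):

#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
	int i, num = 0;
	struct ibv_device **list = ibv_get_device_list(&num);

	if (!list)
		return 1;

	/* Lists every RDMA device registered with the verbs core. */
	for (i = 0; i < num; i++)
		printf("found verbs device: %s\n", ibv_get_device_name(list[i]));

	ibv_free_device_list(list);
	return 0;
}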
diff --git a/drivers/infiniband/hw/mana/Makefile b/drivers/infiniband/hw/mana/Makefile new file mode 100644 index 000000000000..88655fe5e398 --- /dev/null +++ b/drivers/infiniband/hw/mana/Makefile @@ -0,0 +1,4 @@ +# SPDX-License-Identifier: GPL-2.0-only +obj-$(CONFIG_MANA_INFINIBAND) += mana_ib.o + +mana_ib-y := device.o main.o wq.o qp.o cq.o mr.o diff --git a/drivers/infiniband/hw/mana/cq.c b/drivers/infiniband/hw/mana/cq.c new file mode 100644 index 000000000000..d141cab8a1e6 --- /dev/null +++ b/drivers/infiniband/hw/mana/cq.c @@ -0,0 +1,79 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2022, Microsoft Corporation. All rights reserved. + */ + +#include "mana_ib.h" + +int mana_ib_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr, + struct ib_udata *udata) +{ + struct mana_ib_cq *cq = container_of(ibcq, struct mana_ib_cq, ibcq); + struct ib_device *ibdev = ibcq->device; + struct mana_ib_create_cq ucmd = {}; + struct mana_ib_dev *mdev; + int err; + + mdev = container_of(ibdev, struct mana_ib_dev, ib_dev); + + if (udata->inlen < sizeof(ucmd)) + return -EINVAL; + + err = ib_copy_from_udata(&ucmd, udata, min(sizeof(ucmd), udata->inlen)); + if (err) { + ibdev_dbg(ibdev, + "Failed to copy from udata for create cq, %d\n", err); + return err; + } + + if (attr->cqe > MAX_SEND_BUFFERS_PER_QUEUE) { + ibdev_dbg(ibdev, "CQE %d exceeding limit\n", attr->cqe); + return -EINVAL; + } + + cq->cqe = attr->cqe; + cq->umem = ib_umem_get(ibdev, ucmd.buf_addr, cq->cqe * COMP_ENTRY_SIZE, + IB_ACCESS_LOCAL_WRITE); + if (IS_ERR(cq->umem)) { + err = PTR_ERR(cq->umem); + ibdev_dbg(ibdev, "Failed to get umem for create cq, err %d\n", + err); + return err; + } + + err = mana_ib_gd_create_dma_region(mdev, cq->umem, &cq->gdma_region); + if (err) { + ibdev_dbg(ibdev, + "Failed to create dma region for create cq, %d\n", + err); + goto err_release_umem; + } + + ibdev_dbg(ibdev, + "mana_ib_gd_create_dma_region ret %d gdma_region 0x%llx\n", + err, cq->gdma_region); + + /* + * The CQ ID is not known at this time. The ID is generated at create_qp + */ + + return 0; + +err_release_umem: + ib_umem_release(cq->umem); + return err; +} + +int mana_ib_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata) +{ + struct mana_ib_cq *cq = container_of(ibcq, struct mana_ib_cq, ibcq); + struct ib_device *ibdev = ibcq->device; + struct mana_ib_dev *mdev; + + mdev = container_of(ibdev, struct mana_ib_dev, ib_dev); + + mana_ib_gd_destroy_dma_region(mdev, cq->gdma_region); + ib_umem_release(cq->umem); + + return 0; +} diff --git a/drivers/infiniband/hw/mana/device.c b/drivers/infiniband/hw/mana/device.c new file mode 100644 index 000000000000..d4541b8707e4 --- /dev/null +++ b/drivers/infiniband/hw/mana/device.c @@ -0,0 +1,117 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2022, Microsoft Corporation. All rights reserved. 
+ */ + +#include "mana_ib.h" +#include + +MODULE_DESCRIPTION("Microsoft Azure Network Adapter IB driver"); +MODULE_LICENSE("GPL"); +MODULE_IMPORT_NS(NET_MANA); + +static const struct ib_device_ops mana_ib_dev_ops = { + .owner = THIS_MODULE, + .driver_id = RDMA_DRIVER_MANA, + .uverbs_abi_ver = MANA_IB_UVERBS_ABI_VERSION, + + .alloc_pd = mana_ib_alloc_pd, + .alloc_ucontext = mana_ib_alloc_ucontext, + .create_cq = mana_ib_create_cq, + .create_qp = mana_ib_create_qp, + .create_rwq_ind_table = mana_ib_create_rwq_ind_table, + .create_wq = mana_ib_create_wq, + .dealloc_pd = mana_ib_dealloc_pd, + .dealloc_ucontext = mana_ib_dealloc_ucontext, + .dereg_mr = mana_ib_dereg_mr, + .destroy_cq = mana_ib_destroy_cq, + .destroy_qp = mana_ib_destroy_qp, + .destroy_rwq_ind_table = mana_ib_destroy_rwq_ind_table, + .destroy_wq = mana_ib_destroy_wq, + .disassociate_ucontext = mana_ib_disassociate_ucontext, + .get_port_immutable = mana_ib_get_port_immutable, + .mmap = mana_ib_mmap, + .modify_qp = mana_ib_modify_qp, + .modify_wq = mana_ib_modify_wq, + .query_device = mana_ib_query_device, + .query_gid = mana_ib_query_gid, + .query_port = mana_ib_query_port, + .reg_user_mr = mana_ib_reg_user_mr, + + INIT_RDMA_OBJ_SIZE(ib_cq, mana_ib_cq, ibcq), + INIT_RDMA_OBJ_SIZE(ib_pd, mana_ib_pd, ibpd), + INIT_RDMA_OBJ_SIZE(ib_qp, mana_ib_qp, ibqp), + INIT_RDMA_OBJ_SIZE(ib_ucontext, mana_ib_ucontext, ibucontext), + INIT_RDMA_OBJ_SIZE(ib_rwq_ind_table, mana_ib_rwq_ind_table, + ib_ind_table), +}; + +static int mana_ib_probe(struct auxiliary_device *adev, + const struct auxiliary_device_id *id) +{ + struct mana_adev *madev = container_of(adev, struct mana_adev, adev); + struct gdma_dev *mdev = madev->mdev; + struct mana_context *mc; + struct mana_ib_dev *dev; + int ret; + + mc = mdev->driver_data; + + dev = ib_alloc_device(mana_ib_dev, ib_dev); + if (!dev) + return -ENOMEM; + + ib_set_device_ops(&dev->ib_dev, &mana_ib_dev_ops); + + dev->ib_dev.phys_port_cnt = mc->num_ports; + + ibdev_dbg(&dev->ib_dev, "mdev=%p id=%d num_ports=%d\n", mdev, + mdev->dev_id.as_uint32, dev->ib_dev.phys_port_cnt); + + dev->gdma_dev = mdev; + dev->ib_dev.node_type = RDMA_NODE_IB_CA; + + /* + * num_comp_vectors needs to set to the max MSIX index + * when interrupts and event queues are implemented + */ + dev->ib_dev.num_comp_vectors = 1; + dev->ib_dev.dev.parent = mdev->gdma_context->dev; + + ret = ib_register_device(&dev->ib_dev, "mana_%d", + mdev->gdma_context->dev); + if (ret) { + ib_dealloc_device(&dev->ib_dev); + return ret; + } + + dev_set_drvdata(&adev->dev, dev); + + return 0; +} + +static void mana_ib_remove(struct auxiliary_device *adev) +{ + struct mana_ib_dev *dev = dev_get_drvdata(&adev->dev); + + ib_unregister_device(&dev->ib_dev); + ib_dealloc_device(&dev->ib_dev); +} + +static const struct auxiliary_device_id mana_id_table[] = { + { + .name = "mana.rdma", + }, + {}, +}; + +MODULE_DEVICE_TABLE(auxiliary, mana_id_table); + +static struct auxiliary_driver mana_driver = { + .name = "rdma", + .probe = mana_ib_probe, + .remove = mana_ib_remove, + .id_table = mana_id_table, +}; + +module_auxiliary_driver(mana_driver); diff --git a/drivers/infiniband/hw/mana/main.c b/drivers/infiniband/hw/mana/main.c new file mode 100644 index 000000000000..49c20f378367 --- /dev/null +++ b/drivers/infiniband/hw/mana/main.c @@ -0,0 +1,507 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) 2022, Microsoft Corporation. All rights reserved. 
+ */ + +#include "mana_ib.h" + +void mana_ib_uncfg_vport(struct mana_ib_dev *dev, struct mana_ib_pd *pd, + u32 port) +{ + struct gdma_dev *gd = dev->gdma_dev; + struct mana_port_context *mpc; + struct net_device *ndev; + struct mana_context *mc; + + mc = gd->driver_data; + ndev = mc->ports[port]; + mpc = netdev_priv(ndev); + + mutex_lock(&pd->vport_mutex); + + pd->vport_use_count--; + WARN_ON(pd->vport_use_count < 0); + + if (!pd->vport_use_count) + mana_uncfg_vport(mpc); + + mutex_unlock(&pd->vport_mutex); +} + +int mana_ib_cfg_vport(struct mana_ib_dev *dev, u32 port, struct mana_ib_pd *pd, + u32 doorbell_id) +{ + struct gdma_dev *mdev = dev->gdma_dev; + struct mana_port_context *mpc; + struct mana_context *mc; + struct net_device *ndev; + int err; + + mc = mdev->driver_data; + ndev = mc->ports[port]; + mpc = netdev_priv(ndev); + + mutex_lock(&pd->vport_mutex); + + pd->vport_use_count++; + if (pd->vport_use_count > 1) { + ibdev_dbg(&dev->ib_dev, + "Skip as this PD is already configured vport\n"); + mutex_unlock(&pd->vport_mutex); + return 0; + } + + err = mana_cfg_vport(mpc, pd->pdn, doorbell_id); + if (err) { + pd->vport_use_count--; + mutex_unlock(&pd->vport_mutex); + + ibdev_dbg(&dev->ib_dev, "Failed to configure vPort %d\n", err); + return err; + } + + mutex_unlock(&pd->vport_mutex); + + pd->tx_shortform_allowed = mpc->tx_shortform_allowed; + pd->tx_vp_offset = mpc->tx_vp_offset; + + ibdev_dbg(&dev->ib_dev, "vport handle %llx pdid %x doorbell_id %x\n", + mpc->port_handle, pd->pdn, doorbell_id); + + return 0; +} + +int mana_ib_alloc_pd(struct ib_pd *ibpd, struct ib_udata *udata) +{ + struct mana_ib_pd *pd = container_of(ibpd, struct mana_ib_pd, ibpd); + struct ib_device *ibdev = ibpd->device; + struct gdma_create_pd_resp resp = {}; + struct gdma_create_pd_req req = {}; + enum gdma_pd_flags flags = 0; + struct mana_ib_dev *dev; + struct gdma_dev *mdev; + int err; + + dev = container_of(ibdev, struct mana_ib_dev, ib_dev); + mdev = dev->gdma_dev; + + mana_gd_init_req_hdr(&req.hdr, GDMA_CREATE_PD, sizeof(req), + sizeof(resp)); + + req.flags = flags; + err = mana_gd_send_request(mdev->gdma_context, sizeof(req), &req, + sizeof(resp), &resp); + + if (err || resp.hdr.status) { + ibdev_dbg(&dev->ib_dev, + "Failed to get pd_id err %d status %u\n", err, + resp.hdr.status); + if (!err) + err = -EPROTO; + + return err; + } + + pd->pd_handle = resp.pd_handle; + pd->pdn = resp.pd_id; + ibdev_dbg(&dev->ib_dev, "pd_handle 0x%llx pd_id %d\n", + pd->pd_handle, pd->pdn); + + mutex_init(&pd->vport_mutex); + pd->vport_use_count = 0; + return 0; +} + +int mana_ib_dealloc_pd(struct ib_pd *ibpd, struct ib_udata *udata) +{ + struct mana_ib_pd *pd = container_of(ibpd, struct mana_ib_pd, ibpd); + struct ib_device *ibdev = ibpd->device; + struct gdma_destory_pd_resp resp = {}; + struct gdma_destroy_pd_req req = {}; + struct mana_ib_dev *dev; + struct gdma_dev *mdev; + int err; + + dev = container_of(ibdev, struct mana_ib_dev, ib_dev); + mdev = dev->gdma_dev; + + mana_gd_init_req_hdr(&req.hdr, GDMA_DESTROY_PD, sizeof(req), + sizeof(resp)); + + req.pd_handle = pd->pd_handle; + err = mana_gd_send_request(mdev->gdma_context, sizeof(req), &req, + sizeof(resp), &resp); + + if (err || resp.hdr.status) { + ibdev_dbg(&dev->ib_dev, + "Failed to destroy pd_handle 0x%llx err %d status %u", + pd->pd_handle, err, resp.hdr.status); + if (!err) + err = -EPROTO; + } + + return err; +} + +static int mana_gd_destroy_doorbell_page(struct gdma_context *gc, + int doorbell_page) +{ + struct gdma_destroy_resource_range_req req = {}; + 
struct gdma_resp_hdr resp = {}; + int err; + + mana_gd_init_req_hdr(&req.hdr, GDMA_DESTROY_RESOURCE_RANGE, + sizeof(req), sizeof(resp)); + + req.resource_type = GDMA_RESOURCE_DOORBELL_PAGE; + req.num_resources = 1; + req.allocated_resources = doorbell_page; + + err = mana_gd_send_request(gc, sizeof(req), &req, sizeof(resp), &resp); + if (err || resp.status) { + dev_err(gc->dev, + "Failed to destroy doorbell page: ret %d, 0x%x\n", + err, resp.status); + return err ?: -EPROTO; + } + + return 0; +} + +static int mana_gd_allocate_doorbell_page(struct gdma_context *gc, + int *doorbell_page) +{ + struct gdma_allocate_resource_range_req req = {}; + struct gdma_allocate_resource_range_resp resp = {}; + int err; + + mana_gd_init_req_hdr(&req.hdr, GDMA_ALLOCATE_RESOURCE_RANGE, + sizeof(req), sizeof(resp)); + + req.resource_type = GDMA_RESOURCE_DOORBELL_PAGE; + req.num_resources = 1; + req.alignment = 1; + + /* Have GDMA start searching from 0 */ + req.allocated_resources = 0; + + err = mana_gd_send_request(gc, sizeof(req), &req, sizeof(resp), &resp); + if (err || resp.hdr.status) { + dev_err(gc->dev, + "Failed to allocate doorbell page: ret %d, 0x%x\n", + err, resp.hdr.status); + return err ?: -EPROTO; + } + + *doorbell_page = resp.allocated_resources; + + return 0; +} + +int mana_ib_alloc_ucontext(struct ib_ucontext *ibcontext, + struct ib_udata *udata) +{ + struct mana_ib_ucontext *ucontext = + container_of(ibcontext, struct mana_ib_ucontext, ibucontext); + struct ib_device *ibdev = ibcontext->device; + struct mana_ib_dev *mdev; + struct gdma_context *gc; + struct gdma_dev *dev; + int doorbell_page; + int ret; + + mdev = container_of(ibdev, struct mana_ib_dev, ib_dev); + dev = mdev->gdma_dev; + gc = dev->gdma_context; + + /* Allocate a doorbell page index */ + ret = mana_gd_allocate_doorbell_page(gc, &doorbell_page); + if (ret) { + ibdev_dbg(ibdev, "Failed to allocate doorbell page %d\n", ret); + return ret; + } + + ibdev_dbg(ibdev, "Doorbell page allocated %d\n", doorbell_page); + + ucontext->doorbell = doorbell_page; + + return 0; +} + +void mana_ib_dealloc_ucontext(struct ib_ucontext *ibcontext) +{ + struct mana_ib_ucontext *mana_ucontext = + container_of(ibcontext, struct mana_ib_ucontext, ibucontext); + struct ib_device *ibdev = ibcontext->device; + struct mana_ib_dev *mdev; + struct gdma_context *gc; + int ret; + + mdev = container_of(ibdev, struct mana_ib_dev, ib_dev); + gc = mdev->gdma_dev->gdma_context; + + ret = mana_gd_destroy_doorbell_page(gc, mana_ucontext->doorbell); + if (ret) + ibdev_dbg(ibdev, "Failed to destroy doorbell page %d\n", ret); +} + +int mana_ib_gd_create_dma_region(struct mana_ib_dev *dev, struct ib_umem *umem, + mana_handle_t *gdma_region) +{ + struct gdma_dma_region_add_pages_req *add_req = NULL; + struct gdma_create_dma_region_resp create_resp = {}; + struct gdma_create_dma_region_req *create_req; + size_t num_pages_cur, num_pages_to_handle; + unsigned int create_req_msg_size; + struct hw_channel_context *hwc; + struct ib_block_iter biter; + size_t max_pgs_create_cmd; + struct gdma_context *gc; + size_t num_pages_total; + struct gdma_dev *mdev; + unsigned long page_sz; + void *request_buf; + unsigned int i; + int err; + + mdev = dev->gdma_dev; + gc = mdev->gdma_context; + hwc = gc->hwc.driver_data; + + /* Hardware requires dma region to align to chosen page size */ + page_sz = ib_umem_find_best_pgsz(umem, PAGE_SZ_BM, 0); + if (!page_sz) { + ibdev_dbg(&dev->ib_dev, "failed to find page size.\n"); + return -ENOMEM; + } + num_pages_total = ib_umem_num_dma_blocks(umem, 
+
+int mana_ib_gd_create_dma_region(struct mana_ib_dev *dev, struct ib_umem *umem,
+                                 mana_handle_t *gdma_region)
+{
+        struct gdma_dma_region_add_pages_req *add_req = NULL;
+        struct gdma_create_dma_region_resp create_resp = {};
+        struct gdma_create_dma_region_req *create_req;
+        size_t num_pages_cur, num_pages_to_handle;
+        unsigned int create_req_msg_size;
+        struct hw_channel_context *hwc;
+        struct ib_block_iter biter;
+        size_t max_pgs_create_cmd;
+        struct gdma_context *gc;
+        size_t num_pages_total;
+        struct gdma_dev *mdev;
+        unsigned long page_sz;
+        void *request_buf;
+        unsigned int i;
+        int err;
+
+        mdev = dev->gdma_dev;
+        gc = mdev->gdma_context;
+        hwc = gc->hwc.driver_data;
+
+        /* Hardware requires the DMA region to be aligned to the chosen page size */
+        page_sz = ib_umem_find_best_pgsz(umem, PAGE_SZ_BM, 0);
+        if (!page_sz) {
+                ibdev_dbg(&dev->ib_dev, "failed to find page size.\n");
+                return -ENOMEM;
+        }
+        num_pages_total = ib_umem_num_dma_blocks(umem, page_sz);
+
+        max_pgs_create_cmd =
+                (hwc->max_req_msg_size - sizeof(*create_req)) / sizeof(u64);
+        num_pages_to_handle =
+                min_t(size_t, num_pages_total, max_pgs_create_cmd);
+        create_req_msg_size =
+                struct_size(create_req, page_addr_list, num_pages_to_handle);
+
+        request_buf = kzalloc(hwc->max_req_msg_size, GFP_KERNEL);
+        if (!request_buf)
+                return -ENOMEM;
+
+        create_req = request_buf;
+        mana_gd_init_req_hdr(&create_req->hdr, GDMA_CREATE_DMA_REGION,
+                             create_req_msg_size, sizeof(create_resp));
+
+        create_req->length = umem->length;
+        create_req->offset_in_page = umem->address & (page_sz - 1);
+        create_req->gdma_page_type = order_base_2(page_sz) - PAGE_SHIFT;
+        create_req->page_count = num_pages_total;
+        create_req->page_addr_list_len = num_pages_to_handle;
+
+        ibdev_dbg(&dev->ib_dev, "size_dma_region %lu num_pages_total %lu\n",
+                  umem->length, num_pages_total);
+
+        ibdev_dbg(&dev->ib_dev, "page_sz %lu offset_in_page %u\n",
+                  page_sz, create_req->offset_in_page);
+
+        ibdev_dbg(&dev->ib_dev, "num_pages_to_handle %lu, gdma_page_type %u\n",
+                  num_pages_to_handle, create_req->gdma_page_type);
+
+        __rdma_umem_block_iter_start(&biter, umem, page_sz);
+
+        for (i = 0; i < num_pages_to_handle; ++i) {
+                dma_addr_t cur_addr;
+
+                __rdma_block_iter_next(&biter);
+                cur_addr = rdma_block_iter_dma_address(&biter);
+
+                create_req->page_addr_list[i] = cur_addr;
+        }
+
+        err = mana_gd_send_request(gc, create_req_msg_size, create_req,
+                                   sizeof(create_resp), &create_resp);
+        if (err || create_resp.hdr.status) {
+                ibdev_dbg(&dev->ib_dev,
+                          "Failed to create DMA region: %d, 0x%x\n", err,
+                          create_resp.hdr.status);
+                if (!err)
+                        err = -EPROTO;
+
+                goto out;
+        }
+
+        *gdma_region = create_resp.dma_region_handle;
+        ibdev_dbg(&dev->ib_dev, "Created DMA region with handle 0x%llx\n",
+                  *gdma_region);
+
+        num_pages_cur = num_pages_to_handle;
+
+        if (num_pages_cur < num_pages_total) {
+                unsigned int add_req_msg_size;
+                size_t max_pgs_add_cmd =
+                        (hwc->max_req_msg_size - sizeof(*add_req)) /
+                        sizeof(u64);
+
+                num_pages_to_handle =
+                        min_t(size_t, num_pages_total - num_pages_cur,
+                              max_pgs_add_cmd);
+
+                /* Size the request for the number of pages it will carry */
+                add_req_msg_size = struct_size(add_req, page_addr_list,
+                                               num_pages_to_handle);
+                add_req = request_buf;
+
+                while (num_pages_cur < num_pages_total) {
+                        struct gdma_general_resp add_resp = {};
+                        u32 expected_status = 0;
+
+                        if (num_pages_cur + num_pages_to_handle <
+                            num_pages_total) {
+                                /* Status indicating more pages are needed */
+                                expected_status = GDMA_STATUS_MORE_ENTRIES;
+                        }
+
+                        memset(add_req, 0, add_req_msg_size);
+
+                        mana_gd_init_req_hdr(&add_req->hdr,
+                                             GDMA_DMA_REGION_ADD_PAGES,
+                                             add_req_msg_size,
+                                             sizeof(add_resp));
+                        add_req->dma_region_handle = *gdma_region;
+                        add_req->page_addr_list_len = num_pages_to_handle;
+
+                        for (i = 0; i < num_pages_to_handle; ++i) {
+                                dma_addr_t cur_addr =
+                                        rdma_block_iter_dma_address(&biter);
+                                add_req->page_addr_list[i] = cur_addr;
+                                __rdma_block_iter_next(&biter);
+
+                                ibdev_dbg(&dev->ib_dev,
+                                          "page_addr_list %lu addr 0x%llx\n",
+                                          num_pages_cur + i, cur_addr);
+                        }
+
+                        err = mana_gd_send_request(gc, add_req_msg_size,
+                                                   add_req, sizeof(add_resp),
+                                                   &add_resp);
+                        if (err || add_resp.hdr.status != expected_status) {
+                                ibdev_dbg(&dev->ib_dev,
+                                          "Failed to put DMA pages %u: %d, 0x%x\n",
+                                          i, err, add_resp.hdr.status);
+                                err = -EPROTO;
+                                break;
+                        }
+
+                        num_pages_cur += num_pages_to_handle;
+                        num_pages_to_handle =
+                                min_t(size_t, num_pages_total - num_pages_cur,
+                                      max_pgs_add_cmd);
+                        add_req_msg_size = struct_size(add_req, page_addr_list,
+                                                       num_pages_to_handle);
+                }
+        }
+
+        if (err)
+                mana_ib_gd_destroy_dma_region(dev, create_resp.dma_region_handle);
+
+out:
+        kfree(request_buf);
+        return err;
+}
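The CREATE/ADD_PAGES chunking above is easier to follow with concrete numbers. A sketch with hypothetical sizes (the real max_req_msg_size comes from the HWC at probe time, and the header size here is made up):

        /* Hypothetical: hwc->max_req_msg_size = 2048, request header = 48 bytes */
        max_pgs = (2048 - 48) / sizeof(u64);    /* 250 page addresses per message */
        /* A 4 MB umem at 4 KB pages has 1024 pages: one GDMA_CREATE_DMA_REGION
         * carries the first 250, then four GDMA_DMA_REGION_ADD_PAGES messages
         * carry 250 + 250 + 250 + 24; every message except the last one must
         * come back with GDMA_STATUS_MORE_ENTRIES. */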
+
+int mana_ib_gd_destroy_dma_region(struct mana_ib_dev *dev, u64 gdma_region)
+{
+        struct gdma_dev *mdev = dev->gdma_dev;
+        struct gdma_context *gc;
+
+        gc = mdev->gdma_context;
+        ibdev_dbg(&dev->ib_dev, "destroy dma region 0x%llx\n", gdma_region);
+
+        return mana_gd_destroy_dma_region(gc, gdma_region);
+}
+
+int mana_ib_mmap(struct ib_ucontext *ibcontext, struct vm_area_struct *vma)
+{
+        struct mana_ib_ucontext *mana_ucontext =
+                container_of(ibcontext, struct mana_ib_ucontext, ibucontext);
+        struct ib_device *ibdev = ibcontext->device;
+        struct mana_ib_dev *mdev;
+        struct gdma_context *gc;
+        phys_addr_t pfn;
+        pgprot_t prot;
+        int ret;
+
+        mdev = container_of(ibdev, struct mana_ib_dev, ib_dev);
+        gc = mdev->gdma_dev->gdma_context;
+
+        if (vma->vm_pgoff != 0) {
+                ibdev_dbg(ibdev, "Unexpected vm_pgoff %lu\n", vma->vm_pgoff);
+                return -EINVAL;
+        }
+
+        /* Map to the page indexed by ucontext->doorbell */
+        pfn = (gc->phys_db_page_base +
+               gc->db_page_size * mana_ucontext->doorbell) >>
+              PAGE_SHIFT;
+        prot = pgprot_writecombine(vma->vm_page_prot);
+
+        ret = rdma_user_mmap_io(ibcontext, vma, pfn, gc->db_page_size, prot,
+                                NULL);
+        if (ret)
+                ibdev_dbg(ibdev, "can't rdma_user_mmap_io ret %d\n", ret);
+        else
+                ibdev_dbg(ibdev, "mapped I/O pfn 0x%llx page_size %u, ret %d\n",
+                          pfn, gc->db_page_size, ret);
+
+        return ret;
+}
+
+int mana_ib_get_port_immutable(struct ib_device *ibdev, u32 port_num,
+                               struct ib_port_immutable *immutable)
+{
+        /*
+         * This version only supports RAW_PACKET;
+         * other fields need to be filled in for other port types.
+         */
+        immutable->core_cap_flags = RDMA_CORE_PORT_RAW_PACKET;
+
+        return 0;
+}
+
+int mana_ib_query_device(struct ib_device *ibdev, struct ib_device_attr *props,
+                         struct ib_udata *uhw)
+{
+        props->max_qp = MANA_MAX_NUM_QUEUES;
+        props->max_qp_wr = MAX_SEND_BUFFERS_PER_QUEUE;
+
+        /*
+         * max_cqe could potentially be much bigger.
+         * As this version of the driver only supports RAW QPs, set it to the
+         * same value as max_qp_wr.
+         */
+        props->max_cqe = MAX_SEND_BUFFERS_PER_QUEUE;
+
+        props->max_mr_size = MANA_IB_MAX_MR_SIZE;
+        props->max_mr = MANA_IB_MAX_MR;
+        props->max_send_sge = MAX_TX_WQE_SGL_ENTRIES;
+        props->max_recv_sge = MAX_RX_WQE_SGL_ENTRIES;
+
+        return 0;
+}
+
+int mana_ib_query_port(struct ib_device *ibdev, u32 port,
+                       struct ib_port_attr *props)
+{
+        /* This version doesn't return port properties */
+        return 0;
+}
+
+int mana_ib_query_gid(struct ib_device *ibdev, u32 port, int index,
+                      union ib_gid *gid)
+{
+        /* This version doesn't return GID properties */
+        return 0;
+}
+
+void mana_ib_disassociate_ucontext(struct ib_ucontext *ibcontext)
+{
+}
diff --git a/drivers/infiniband/hw/mana/mana_ib.h b/drivers/infiniband/hw/mana/mana_ib.h
new file mode 100644
index 000000000000..502cc8672eef
--- /dev/null
+++ b/drivers/infiniband/hw/mana/mana_ib.h
@@ -0,0 +1,162 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2022 Microsoft Corporation. All rights reserved.
+ */
+
+#ifndef _MANA_IB_H_
+#define _MANA_IB_H_
+
+#include <rdma/ib_verbs.h>
+#include <rdma/ib_mad.h>
+#include <rdma/ib_umem.h>
+#include <rdma/mana-abi.h>
+#include <rdma/uverbs_ioctl.h>
+
+#include <net/mana/mana.h>
+
+#define PAGE_SZ_BM \
+        (SZ_4K | SZ_8K | SZ_16K | SZ_32K | SZ_64K | SZ_128K | SZ_256K | \
+         SZ_512K | SZ_1M | SZ_2M)
+
+/* MANA doesn't have any limit for MR size */
+#define MANA_IB_MAX_MR_SIZE     U64_MAX
+
+/*
+ * The hardware limit of the number of MRs is greater than the maximum
+ * number of MRs that can be represented in 24 bits.
+ */
+#define MANA_IB_MAX_MR          0xFFFFFFu
+
+struct mana_ib_dev {
+        struct ib_device ib_dev;
+        struct gdma_dev *gdma_dev;
+};
+
+struct mana_ib_wq {
+        struct ib_wq ibwq;
+        struct ib_umem *umem;
+        int wqe;
+        u32 wq_buf_size;
+        u64 gdma_region;
+        u64 id;
+        mana_handle_t rx_object;
+};
+
+struct mana_ib_pd {
+        struct ib_pd ibpd;
+        u32 pdn;
+        mana_handle_t pd_handle;
+
+        /* Mutex for sharing access to vport_use_count */
+        struct mutex vport_mutex;
+        int vport_use_count;
+
+        bool tx_shortform_allowed;
+        u32 tx_vp_offset;
+};
+
+struct mana_ib_mr {
+        struct ib_mr ibmr;
+        struct ib_umem *umem;
+        mana_handle_t mr_handle;
+};
+
+struct mana_ib_cq {
+        struct ib_cq ibcq;
+        struct ib_umem *umem;
+        int cqe;
+        u64 gdma_region;
+        u64 id;
+};
+
+struct mana_ib_qp {
+        struct ib_qp ibqp;
+
+        /* Work queue info */
+        struct ib_umem *sq_umem;
+        int sqe;
+        u64 sq_gdma_region;
+        u64 sq_id;
+        mana_handle_t tx_object;
+
+        /* The port on the IB device, starting with 1 */
+        u32 port;
+};
+
+struct mana_ib_ucontext {
+        struct ib_ucontext ibucontext;
+        u32 doorbell;
+};
+
+struct mana_ib_rwq_ind_table {
+        struct ib_rwq_ind_table ib_ind_table;
+};
+
+int mana_ib_gd_create_dma_region(struct mana_ib_dev *dev, struct ib_umem *umem,
+                                 mana_handle_t *gdma_region);
+
+int mana_ib_gd_destroy_dma_region(struct mana_ib_dev *dev,
+                                  mana_handle_t gdma_region);
+
+struct ib_wq *mana_ib_create_wq(struct ib_pd *pd,
+                                struct ib_wq_init_attr *init_attr,
+                                struct ib_udata *udata);
+
+int mana_ib_modify_wq(struct ib_wq *wq, struct ib_wq_attr *wq_attr,
+                      u32 wq_attr_mask, struct ib_udata *udata);
+
+int mana_ib_destroy_wq(struct ib_wq *ibwq, struct ib_udata *udata);
+
+int mana_ib_create_rwq_ind_table(struct ib_rwq_ind_table *ib_rwq_ind_table,
+                                 struct ib_rwq_ind_table_init_attr *init_attr,
+                                 struct ib_udata *udata);
+
+int mana_ib_destroy_rwq_ind_table(struct ib_rwq_ind_table *ib_rwq_ind_tbl);
+
+struct ib_mr *mana_ib_get_dma_mr(struct ib_pd *ibpd, int access_flags);
+
+struct ib_mr *mana_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
+                                  u64 iova, int access_flags,
+                                  struct ib_udata *udata);
+
+int mana_ib_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata);
+
+int mana_ib_create_qp(struct ib_qp *qp, struct ib_qp_init_attr *qp_init_attr,
+                      struct ib_udata *udata);
+
+int mana_ib_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+                      int attr_mask, struct ib_udata *udata);
+
+int mana_ib_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata);
+
+int mana_ib_cfg_vport(struct mana_ib_dev *dev, u32 port_id,
+                      struct mana_ib_pd *pd, u32 doorbell_id);
+void mana_ib_uncfg_vport(struct mana_ib_dev *dev, struct mana_ib_pd *pd,
+                         u32 port);
+
+int mana_ib_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
+                      struct ib_udata *udata);
+
+int mana_ib_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata);
+
+int mana_ib_alloc_pd(struct ib_pd *ibpd, struct ib_udata *udata);
+int mana_ib_dealloc_pd(struct ib_pd *ibpd, struct ib_udata *udata);
+
+int mana_ib_alloc_ucontext(struct ib_ucontext *ibcontext,
+                           struct ib_udata *udata);
+void mana_ib_dealloc_ucontext(struct ib_ucontext *ibcontext);
+
+int mana_ib_mmap(struct ib_ucontext *ibcontext, struct vm_area_struct *vma);
+
+int mana_ib_get_port_immutable(struct ib_device *ibdev, u32 port_num,
+                               struct ib_port_immutable *immutable);
+int mana_ib_query_device(struct ib_device *ibdev, struct ib_device_attr *props,
+                         struct ib_udata *uhw);
+int mana_ib_query_port(struct ib_device *ibdev, u32 port,
+                       struct ib_port_attr *props);
+int mana_ib_query_gid(struct ib_device *ibdev, u32 port, int index,
+                      union ib_gid *gid);
+
+void mana_ib_disassociate_ucontext(struct ib_ucontext *ibcontext);
+
+#endif
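PAGE_SZ_BM above is the set of page sizes the device can use for a DMA region, and ib_umem_find_best_pgsz() picks the largest one compatible with the umem's length and alignment. A quick illustration, with a hypothetical buffer:

        /* A 1 MB umem whose start is 64 KB- but not 128 KB-aligned: */
        unsigned long pg = ib_umem_find_best_pgsz(umem, PAGE_SZ_BM, 0);
        /* pg == SZ_64K here, so the region needs 16 page entries instead of
         * the 256 it would need at SZ_4K */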
diff --git a/drivers/infiniband/hw/mana/mr.c b/drivers/infiniband/hw/mana/mr.c
new file mode 100644
index 000000000000..f712a0ba47be
--- /dev/null
+++ b/drivers/infiniband/hw/mana/mr.c
@@ -0,0 +1,197 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2022, Microsoft Corporation. All rights reserved.
+ */
+
+#include "mana_ib.h"
+
+#define VALID_MR_FLAGS \
+        (IB_ACCESS_LOCAL_WRITE | IB_ACCESS_REMOTE_WRITE | IB_ACCESS_REMOTE_READ)
+
+static enum gdma_mr_access_flags
+mana_ib_verbs_to_gdma_access_flags(int access_flags)
+{
+        enum gdma_mr_access_flags flags = GDMA_ACCESS_FLAG_LOCAL_READ;
+
+        if (access_flags & IB_ACCESS_LOCAL_WRITE)
+                flags |= GDMA_ACCESS_FLAG_LOCAL_WRITE;
+
+        if (access_flags & IB_ACCESS_REMOTE_WRITE)
+                flags |= GDMA_ACCESS_FLAG_REMOTE_WRITE;
+
+        if (access_flags & IB_ACCESS_REMOTE_READ)
+                flags |= GDMA_ACCESS_FLAG_REMOTE_READ;
+
+        return flags;
+}
+
+static int mana_ib_gd_create_mr(struct mana_ib_dev *dev, struct mana_ib_mr *mr,
+                                struct gdma_create_mr_params *mr_params)
+{
+        struct gdma_create_mr_response resp = {};
+        struct gdma_create_mr_request req = {};
+        struct gdma_dev *mdev = dev->gdma_dev;
+        struct gdma_context *gc;
+        int err;
+
+        gc = mdev->gdma_context;
+
+        mana_gd_init_req_hdr(&req.hdr, GDMA_CREATE_MR, sizeof(req),
+                             sizeof(resp));
+        req.pd_handle = mr_params->pd_handle;
+        req.mr_type = mr_params->mr_type;
+
+        switch (mr_params->mr_type) {
+        case GDMA_MR_TYPE_GVA:
+                req.gva.dma_region_handle = mr_params->gva.dma_region_handle;
+                req.gva.virtual_address = mr_params->gva.virtual_address;
+                req.gva.access_flags = mr_params->gva.access_flags;
+                break;
+
+        default:
+                ibdev_dbg(&dev->ib_dev,
+                          "invalid param (GDMA_MR_TYPE) passed, type %d\n",
+                          req.mr_type);
+                return -EINVAL;
+        }
+
+        err = mana_gd_send_request(gc, sizeof(req), &req, sizeof(resp), &resp);
+
+        if (err || resp.hdr.status) {
+                ibdev_dbg(&dev->ib_dev, "Failed to create mr %d, %u\n", err,
+                          resp.hdr.status);
+                if (!err)
+                        err = -EPROTO;
+
+                return err;
+        }
+
+        mr->ibmr.lkey = resp.lkey;
+        mr->ibmr.rkey = resp.rkey;
+        mr->mr_handle = resp.mr_handle;
+
+        return 0;
+}
+
+static int mana_ib_gd_destroy_mr(struct mana_ib_dev *dev,
+                                 gdma_obj_handle_t mr_handle)
+{
+        struct gdma_destroy_mr_response resp = {};
+        struct gdma_destroy_mr_request req = {};
+        struct gdma_dev *mdev = dev->gdma_dev;
+        struct gdma_context *gc;
+        int err;
+
+        gc = mdev->gdma_context;
+
+        mana_gd_init_req_hdr(&req.hdr, GDMA_DESTROY_MR, sizeof(req),
+                             sizeof(resp));
+
+        req.mr_handle = mr_handle;
+
+        err = mana_gd_send_request(gc, sizeof(req), &req, sizeof(resp), &resp);
+        if (err || resp.hdr.status) {
+                dev_err(gc->dev, "Failed to destroy MR: %d, 0x%x\n", err,
+                        resp.hdr.status);
+                if (!err)
+                        err = -EPROTO;
+                return err;
+        }
+
+        return 0;
+}
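For example, a typical write-capable registration translates as follows (local read is implicit on the GDMA side, per the helper above):

        int ib_flags = IB_ACCESS_LOCAL_WRITE | IB_ACCESS_REMOTE_WRITE;
        /* mana_ib_verbs_to_gdma_access_flags(ib_flags) ==
         *         GDMA_ACCESS_FLAG_LOCAL_READ  |
         *         GDMA_ACCESS_FLAG_LOCAL_WRITE |
         *         GDMA_ACCESS_FLAG_REMOTE_WRITE */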
+
+struct ib_mr *mana_ib_reg_user_mr(struct ib_pd *ibpd, u64 start, u64 length,
+                                  u64 iova, int access_flags,
+                                  struct ib_udata *udata)
+{
+        struct mana_ib_pd *pd = container_of(ibpd, struct mana_ib_pd, ibpd);
+        struct gdma_create_mr_params mr_params = {};
+        struct ib_device *ibdev = ibpd->device;
+        gdma_obj_handle_t dma_region_handle;
+        struct mana_ib_dev *dev;
+        struct mana_ib_mr *mr;
+        int err;
+
+        dev = container_of(ibdev, struct mana_ib_dev, ib_dev);
+
+        ibdev_dbg(ibdev,
+                  "start 0x%llx, iova 0x%llx length 0x%llx access_flags 0x%x\n",
+                  start, iova, length, access_flags);
+
+        if (access_flags & ~VALID_MR_FLAGS)
+                return ERR_PTR(-EINVAL);
+
+        mr = kzalloc(sizeof(*mr), GFP_KERNEL);
+        if (!mr)
+                return ERR_PTR(-ENOMEM);
+
+        mr->umem = ib_umem_get(ibdev, start, length, access_flags);
+        if (IS_ERR(mr->umem)) {
+                err = PTR_ERR(mr->umem);
+                ibdev_dbg(ibdev,
+                          "Failed to get umem for register user-mr, %d\n", err);
+                goto err_free;
+        }
+
+        err = mana_ib_gd_create_dma_region(dev, mr->umem, &dma_region_handle);
+        if (err) {
+                ibdev_dbg(ibdev,
+                          "Failed to create dma region for user-mr, %d\n",
+                          err);
+                goto err_umem;
+        }
+
+        ibdev_dbg(ibdev,
+                  "mana_ib_gd_create_dma_region ret %d gdma_region %llx\n", err,
+                  dma_region_handle);
+
+        mr_params.pd_handle = pd->pd_handle;
+        mr_params.mr_type = GDMA_MR_TYPE_GVA;
+        mr_params.gva.dma_region_handle = dma_region_handle;
+        mr_params.gva.virtual_address = iova;
+        mr_params.gva.access_flags =
+                mana_ib_verbs_to_gdma_access_flags(access_flags);
+
+        err = mana_ib_gd_create_mr(dev, mr, &mr_params);
+        if (err)
+                goto err_dma_region;
+
+        /*
+         * There is no need to keep track of dma_region_handle after the MR is
+         * successfully created. The dma_region_handle is tracked in the PF
+         * as part of the lifecycle of this MR.
+         */
+
+        return &mr->ibmr;
+
+err_dma_region:
+        mana_gd_destroy_dma_region(dev->gdma_dev->gdma_context,
+                                   dma_region_handle);
+
+err_umem:
+        ib_umem_release(mr->umem);
+
+err_free:
+        kfree(mr);
+        return ERR_PTR(err);
+}
+
+int mana_ib_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata)
+{
+        struct mana_ib_mr *mr = container_of(ibmr, struct mana_ib_mr, ibmr);
+        struct ib_device *ibdev = ibmr->device;
+        struct mana_ib_dev *dev;
+        int err;
+
+        dev = container_of(ibdev, struct mana_ib_dev, ib_dev);
+
+        err = mana_ib_gd_destroy_mr(dev, mr->mr_handle);
+        if (err)
+                return err;
+
+        if (mr->umem)
+                ib_umem_release(mr->umem);
+
+        kfree(mr);
+
+        return 0;
+}
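From userspace, the whole path above is driven by a plain ibv_reg_mr() call; a minimal, hedged sketch (standard libibverbs, nothing MANA-specific assumed):

        struct ibv_pd *pd = ibv_alloc_pd(ctx);
        void *buf = aligned_alloc(sysconf(_SC_PAGESIZE), len);
        /* ends up in mana_ib_reg_user_mr(): pin umem -> DMA region -> GDMA_CREATE_MR */
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_WRITE);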
diff --git a/drivers/infiniband/hw/mana/qp.c b/drivers/infiniband/hw/mana/qp.c
new file mode 100644
index 000000000000..ea15ec77e321
--- /dev/null
+++ b/drivers/infiniband/hw/mana/qp.c
@@ -0,0 +1,506 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2022, Microsoft Corporation. All rights reserved.
+ */
+
+#include "mana_ib.h"
+
+static int mana_ib_cfg_vport_steering(struct mana_ib_dev *dev,
+                                      struct net_device *ndev,
+                                      mana_handle_t default_rxobj,
+                                      mana_handle_t ind_table[],
+                                      u32 log_ind_tbl_size, u32 rx_hash_key_len,
+                                      u8 *rx_hash_key)
+{
+        struct mana_port_context *mpc = netdev_priv(ndev);
+        struct mana_cfg_rx_steer_req *req = NULL;
+        struct mana_cfg_rx_steer_resp resp = {};
+        mana_handle_t *req_indir_tab;
+        struct gdma_context *gc;
+        struct gdma_dev *mdev;
+        u32 req_buf_size;
+        int i, err;
+
+        mdev = dev->gdma_dev;
+        gc = mdev->gdma_context;
+
+        req_buf_size =
+                sizeof(*req) + sizeof(mana_handle_t) * MANA_INDIRECT_TABLE_SIZE;
+        req = kzalloc(req_buf_size, GFP_KERNEL);
+        if (!req)
+                return -ENOMEM;
+
+        mana_gd_init_req_hdr(&req->hdr, MANA_CONFIG_VPORT_RX, req_buf_size,
+                             sizeof(resp));
+
+        req->vport = mpc->port_handle;
+        req->rx_enable = 1;
+        req->update_default_rxobj = 1;
+        req->default_rxobj = default_rxobj;
+        req->hdr.dev_id = mdev->dev_id;
+
+        /* If there is more than one entry in the indirection table, enable RSS */
+        if (log_ind_tbl_size)
+                req->rss_enable = true;
+
+        req->num_indir_entries = MANA_INDIRECT_TABLE_SIZE;
+        req->indir_tab_offset = sizeof(*req);
+        req->update_indir_tab = true;
+
+        req_indir_tab = (mana_handle_t *)(req + 1);
+        /* The indirection table passed to the hardware must have
+         * MANA_INDIRECT_TABLE_SIZE entries. Expand the verb-level
+         * ind_table to MANA_INDIRECT_TABLE_SIZE if required.
+         */
+        ibdev_dbg(&dev->ib_dev, "ind table size %u\n", 1 << log_ind_tbl_size);
+        for (i = 0; i < MANA_INDIRECT_TABLE_SIZE; i++) {
+                req_indir_tab[i] = ind_table[i % (1 << log_ind_tbl_size)];
+                ibdev_dbg(&dev->ib_dev, "index %u handle 0x%llx\n", i,
+                          req_indir_tab[i]);
+        }
+
+        req->update_hashkey = true;
+        if (rx_hash_key_len)
+                memcpy(req->hashkey, rx_hash_key, rx_hash_key_len);
+        else
+                netdev_rss_key_fill(req->hashkey, MANA_HASH_KEY_SIZE);
+
+        ibdev_dbg(&dev->ib_dev, "vport handle %llu default_rxobj 0x%llx\n",
+                  req->vport, default_rxobj);
+
+        err = mana_gd_send_request(gc, req_buf_size, req, sizeof(resp), &resp);
+        if (err) {
+                netdev_err(ndev, "Failed to configure vPort RX: %d\n", err);
+                goto out;
+        }
+
+        if (resp.hdr.status) {
+                netdev_err(ndev, "vPort RX configuration failed: 0x%x\n",
+                           resp.hdr.status);
+                err = -EPROTO;
+                goto out;
+        }
+
+        netdev_info(ndev, "Configured steering vPort %llu log_entries %u\n",
+                    mpc->port_handle, log_ind_tbl_size);
+
+out:
+        kfree(req);
+        return err;
+}
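The modulo in the loop above replicates the small verb-level table across the hardware's fixed-size table. For instance, with log_ind_tbl_size = 2 (four WQs) and a hardware table of, hypothetically, 64 entries:

        /* req_indir_tab becomes { wq0, wq1, wq2, wq3, wq0, wq1, ... },
         * i.e. the 4-entry verb table repeated 16 times */
        req_indir_tab[i] = ind_table[i % 4];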
+
+static int mana_ib_create_qp_rss(struct ib_qp *ibqp, struct ib_pd *pd,
+                                 struct ib_qp_init_attr *attr,
+                                 struct ib_udata *udata)
+{
+        struct mana_ib_qp *qp = container_of(ibqp, struct mana_ib_qp, ibqp);
+        struct mana_ib_dev *mdev =
+                container_of(pd->device, struct mana_ib_dev, ib_dev);
+        struct ib_rwq_ind_table *ind_tbl = attr->rwq_ind_tbl;
+        struct mana_ib_create_qp_rss_resp resp = {};
+        struct mana_ib_create_qp_rss ucmd = {};
+        struct gdma_dev *gd = mdev->gdma_dev;
+        mana_handle_t *mana_ind_table;
+        struct mana_port_context *mpc;
+        struct mana_context *mc;
+        struct net_device *ndev;
+        struct mana_ib_cq *cq;
+        struct mana_ib_wq *wq;
+        unsigned int ind_tbl_size;
+        struct ib_cq *ibcq;
+        struct ib_wq *ibwq;
+        int i = 0;
+        u32 port;
+        int ret;
+
+        mc = gd->driver_data;
+
+        if (!udata || udata->inlen < sizeof(ucmd))
+                return -EINVAL;
+
+        ret = ib_copy_from_udata(&ucmd, udata, min(sizeof(ucmd), udata->inlen));
+        if (ret) {
+                ibdev_dbg(&mdev->ib_dev,
+                          "Failed copy from udata for create rss-qp, err %d\n",
+                          ret);
+                return ret;
+        }
+
+        if (attr->cap.max_recv_wr > MAX_SEND_BUFFERS_PER_QUEUE) {
+                ibdev_dbg(&mdev->ib_dev,
+                          "Requested max_recv_wr %d exceeding limit\n",
+                          attr->cap.max_recv_wr);
+                return -EINVAL;
+        }
+
+        if (attr->cap.max_recv_sge > MAX_RX_WQE_SGL_ENTRIES) {
+                ibdev_dbg(&mdev->ib_dev,
+                          "Requested max_recv_sge %d exceeding limit\n",
+                          attr->cap.max_recv_sge);
+                return -EINVAL;
+        }
+
+        ind_tbl_size = 1 << ind_tbl->log_ind_tbl_size;
+        if (ind_tbl_size > MANA_INDIRECT_TABLE_SIZE) {
+                ibdev_dbg(&mdev->ib_dev,
+                          "Indirect table size %d exceeding limit\n",
+                          ind_tbl_size);
+                return -EINVAL;
+        }
+
+        if (ucmd.rx_hash_function != MANA_IB_RX_HASH_FUNC_TOEPLITZ) {
+                ibdev_dbg(&mdev->ib_dev,
+                          "RX Hash function is not supported, %d\n",
+                          ucmd.rx_hash_function);
+                return -EINVAL;
+        }
+
+        /* IB ports start with 1, MANA ports start with 0 */
+        port = ucmd.port;
+        if (port < 1 || port > mc->num_ports) {
+                ibdev_dbg(&mdev->ib_dev, "Invalid port %u in creating qp\n",
+                          port);
+                return -EINVAL;
+        }
+        ndev = mc->ports[port - 1];
+        mpc = netdev_priv(ndev);
+
+        ibdev_dbg(&mdev->ib_dev, "rx_hash_function %d port %d\n",
+                  ucmd.rx_hash_function, port);
+
+        mana_ind_table = kcalloc(ind_tbl_size, sizeof(mana_handle_t),
+                                 GFP_KERNEL);
+        if (!mana_ind_table) {
+                ret = -ENOMEM;
+                goto fail;
+        }
+
+        qp->port = port;
+
+        for (i = 0; i < ind_tbl_size; i++) {
+                struct mana_obj_spec wq_spec = {};
+                struct mana_obj_spec cq_spec = {};
+
+                ibwq = ind_tbl->ind_tbl[i];
+                wq = container_of(ibwq, struct mana_ib_wq, ibwq);
+
+                ibcq = ibwq->cq;
+                cq = container_of(ibcq, struct mana_ib_cq, ibcq);
+
+                wq_spec.gdma_region = wq->gdma_region;
+                wq_spec.queue_size = wq->wq_buf_size;
+
+                cq_spec.gdma_region = cq->gdma_region;
+                cq_spec.queue_size = cq->cqe * COMP_ENTRY_SIZE;
+                cq_spec.modr_ctx_id = 0;
+                cq_spec.attached_eq = GDMA_CQ_NO_EQ;
+
+                ret = mana_create_wq_obj(mpc, mpc->port_handle, GDMA_RQ,
+                                         &wq_spec, &cq_spec, &wq->rx_object);
+                if (ret)
+                        goto fail;
+
+                /* The GDMA regions are now owned by the WQ object */
+                wq->gdma_region = GDMA_INVALID_DMA_REGION;
+                cq->gdma_region = GDMA_INVALID_DMA_REGION;
+
+                wq->id = wq_spec.queue_index;
+                cq->id = cq_spec.queue_index;
+
+                ibdev_dbg(&mdev->ib_dev,
+                          "ret %d rx_object 0x%llx wq id %llu cq id %llu\n",
+                          ret, wq->rx_object, wq->id, cq->id);
+
+                resp.entries[i].cqid = cq->id;
+                resp.entries[i].wqid = wq->id;
+
+                mana_ind_table[i] = wq->rx_object;
+        }
+        resp.num_entries = i;
+
+        ret = mana_ib_cfg_vport_steering(mdev, ndev, wq->rx_object,
+                                         mana_ind_table,
+                                         ind_tbl->log_ind_tbl_size,
+                                         ucmd.rx_hash_key_len,
+                                         ucmd.rx_hash_key);
+        if (ret)
+                goto fail;
+
+        ret = ib_copy_to_udata(udata, &resp, sizeof(resp));
+        if (ret) {
+                ibdev_dbg(&mdev->ib_dev,
+                          "Failed to copy to udata create rss-qp, %d\n",
+                          ret);
+                goto fail;
+        }
+
+        kfree(mana_ind_table);
+
+        return 0;
+
+fail:
+        while (i-- > 0) {
+                ibwq = ind_tbl->ind_tbl[i];
+                wq = container_of(ibwq, struct mana_ib_wq, ibwq);
+                mana_destroy_wq_obj(mpc, GDMA_RQ, wq->rx_object);
+        }
+
+        kfree(mana_ind_table);
+
+        return ret;
+}
+
+static int mana_ib_create_qp_raw(struct ib_qp *ibqp, struct ib_pd *ibpd,
+                                 struct ib_qp_init_attr *attr,
+                                 struct ib_udata *udata)
+{
+        struct mana_ib_pd *pd = container_of(ibpd, struct mana_ib_pd, ibpd);
+        struct mana_ib_qp *qp = container_of(ibqp, struct mana_ib_qp, ibqp);
+        struct mana_ib_dev *mdev =
+                container_of(ibpd->device, struct mana_ib_dev, ib_dev);
+        struct mana_ib_cq *send_cq =
+                container_of(attr->send_cq, struct mana_ib_cq, ibcq);
+        struct mana_ib_ucontext *mana_ucontext =
+                rdma_udata_to_drv_context(udata, struct mana_ib_ucontext,
+                                          ibucontext);
+        struct mana_ib_create_qp_resp resp = {};
+        struct gdma_dev *gd = mdev->gdma_dev;
+        struct mana_ib_create_qp ucmd = {};
+        struct mana_obj_spec wq_spec = {};
+        struct mana_obj_spec cq_spec = {};
+        struct mana_port_context *mpc;
+        struct mana_context *mc;
+        struct net_device *ndev;
+        struct ib_umem *umem;
+        int err;
+        u32 port;
+
+        mc = gd->driver_data;
+
+        if (!mana_ucontext || udata->inlen < sizeof(ucmd))
+                return -EINVAL;
+
+        err = ib_copy_from_udata(&ucmd, udata, min(sizeof(ucmd), udata->inlen));
+        if (err) {
+                ibdev_dbg(&mdev->ib_dev,
+                          "Failed to copy from udata create qp-raw, %d\n", err);
+                return err;
+        }
+
+        /* IB ports start with 1, MANA Ethernet ports start with 0 */
+        port = ucmd.port;
+        if (port < 1 || port > mc->num_ports)
+                return -EINVAL;
+
+        if (attr->cap.max_send_wr > MAX_SEND_BUFFERS_PER_QUEUE) {
+                ibdev_dbg(&mdev->ib_dev,
+                          "Requested max_send_wr %d exceeding limit\n",
+                          attr->cap.max_send_wr);
+                return -EINVAL;
+        }
+
+        if (attr->cap.max_send_sge > MAX_TX_WQE_SGL_ENTRIES) {
+                ibdev_dbg(&mdev->ib_dev,
+                          "Requested max_send_sge %d exceeding limit\n",
+                          attr->cap.max_send_sge);
+                return -EINVAL;
+        }
+
+        ndev = mc->ports[port - 1];
+        mpc = netdev_priv(ndev);
+        ibdev_dbg(&mdev->ib_dev, "port %u ndev %p mpc %p\n", port, ndev, mpc);
+
+        err = mana_ib_cfg_vport(mdev, port - 1, pd, mana_ucontext->doorbell);
+        if (err)
+                return -ENODEV;
+
+        qp->port = port;
+
+        ibdev_dbg(&mdev->ib_dev, "ucmd sq_buf_addr 0x%llx port %u\n",
+                  ucmd.sq_buf_addr, ucmd.port);
+
+        umem = ib_umem_get(ibpd->device, ucmd.sq_buf_addr, ucmd.sq_buf_size,
+                           IB_ACCESS_LOCAL_WRITE);
+        if (IS_ERR(umem)) {
+                err = PTR_ERR(umem);
+                ibdev_dbg(&mdev->ib_dev,
+                          "Failed to get umem for create qp-raw, err %d\n",
+                          err);
+                goto err_free_vport;
+        }
+        qp->sq_umem = umem;
+
+        err = mana_ib_gd_create_dma_region(mdev, qp->sq_umem,
+                                           &qp->sq_gdma_region);
+        if (err) {
+                ibdev_dbg(&mdev->ib_dev,
+                          "Failed to create dma region for create qp-raw, %d\n",
+                          err);
+                goto err_release_umem;
+        }
+
+        ibdev_dbg(&mdev->ib_dev,
+                  "mana_ib_gd_create_dma_region ret %d gdma_region 0x%llx\n",
+                  err, qp->sq_gdma_region);
+
+        /* Create a WQ on the same port handle used by the Ethernet */
+        wq_spec.gdma_region = qp->sq_gdma_region;
+        wq_spec.queue_size = ucmd.sq_buf_size;
+
+        cq_spec.gdma_region = send_cq->gdma_region;
+        cq_spec.queue_size = send_cq->cqe * COMP_ENTRY_SIZE;
+        cq_spec.modr_ctx_id = 0;
+        cq_spec.attached_eq = GDMA_CQ_NO_EQ;
+
+        err = mana_create_wq_obj(mpc, mpc->port_handle, GDMA_SQ, &wq_spec,
+                                 &cq_spec, &qp->tx_object);
+        if (err) {
+                ibdev_dbg(&mdev->ib_dev,
+                          "Failed to create wq for create raw-qp, err %d\n",
+                          err);
+                goto err_destroy_dma_region;
+        }
+
+        /* The GDMA regions are now owned by the WQ object */
+        qp->sq_gdma_region = GDMA_INVALID_DMA_REGION;
+        send_cq->gdma_region = GDMA_INVALID_DMA_REGION;
+
+        qp->sq_id = wq_spec.queue_index;
+        send_cq->id = cq_spec.queue_index;
+
+        ibdev_dbg(&mdev->ib_dev,
+                  "ret %d qp->tx_object 0x%llx sq id %llu cq id %llu\n", err,
+                  qp->tx_object, qp->sq_id, send_cq->id);
+
+        resp.sqid = qp->sq_id;
+        resp.cqid = send_cq->id;
+        resp.tx_vp_offset = pd->tx_vp_offset;
+
+        err = ib_copy_to_udata(udata, &resp, sizeof(resp));
+        if (err) {
+                ibdev_dbg(&mdev->ib_dev,
+                          "Failed copy udata for create qp-raw, %d\n",
+                          err);
+                goto err_destroy_wq_obj;
+        }
+
+        return 0;
+
+err_destroy_wq_obj:
+        mana_destroy_wq_obj(mpc, GDMA_SQ, qp->tx_object);
+
+err_destroy_dma_region:
+        mana_ib_gd_destroy_dma_region(mdev, qp->sq_gdma_region);
+
+err_release_umem:
+        ib_umem_release(umem);
+
+err_free_vport:
+        mana_ib_uncfg_vport(mdev, pd, port - 1);
+
+        return err;
+}
+
+int mana_ib_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *attr,
+                      struct ib_udata *udata)
+{
+        switch (attr->qp_type) {
+        case IB_QPT_RAW_PACKET:
+                /* When rwq_ind_tbl is used, it's for creating WQs for RSS */
+                if (attr->rwq_ind_tbl)
+                        return mana_ib_create_qp_rss(ibqp, ibqp->pd, attr,
+                                                     udata);
+
+                return mana_ib_create_qp_raw(ibqp, ibqp->pd, attr, udata);
+        default:
+                /* Creating QP types other than IB_QPT_RAW_PACKET is not supported */
+                ibdev_dbg(ibqp->device, "Creating QP type %u not supported\n",
+                          attr->qp_type);
+        }
+
+        return -EINVAL;
+}
+
+int mana_ib_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+                      int attr_mask, struct ib_udata *udata)
+{
+        /* modify_qp is not supported by this version of the driver */
+        return -EOPNOTSUPP;
+}
+
+static int mana_ib_destroy_qp_rss(struct mana_ib_qp *qp,
+                                  struct ib_rwq_ind_table *ind_tbl,
+                                  struct ib_udata *udata)
+{
+        struct mana_ib_dev *mdev =
+                container_of(qp->ibqp.device, struct mana_ib_dev, ib_dev);
+        struct gdma_dev *gd = mdev->gdma_dev;
+        struct mana_port_context *mpc;
+        struct mana_context *mc;
+        struct net_device *ndev;
+        struct mana_ib_wq *wq;
+        struct ib_wq *ibwq;
+        int i;
+
+        mc = gd->driver_data;
+        ndev = mc->ports[qp->port - 1];
+        mpc = netdev_priv(ndev);
+
+        for (i = 0; i < (1 << ind_tbl->log_ind_tbl_size); i++) {
+                ibwq = ind_tbl->ind_tbl[i];
+                wq = container_of(ibwq, struct mana_ib_wq, ibwq);
+                ibdev_dbg(&mdev->ib_dev, "destroying wq->rx_object %llu\n",
+                          wq->rx_object);
+                mana_destroy_wq_obj(mpc, GDMA_RQ, wq->rx_object);
+        }
+
+        return 0;
+}
+
+static int mana_ib_destroy_qp_raw(struct mana_ib_qp *qp, struct ib_udata *udata)
+{
+        struct mana_ib_dev *mdev =
+                container_of(qp->ibqp.device, struct mana_ib_dev, ib_dev);
+        struct gdma_dev *gd = mdev->gdma_dev;
+        struct ib_pd *ibpd = qp->ibqp.pd;
+        struct mana_port_context *mpc;
+        struct mana_context *mc;
+        struct net_device *ndev;
+        struct mana_ib_pd *pd;
+
+        mc = gd->driver_data;
+        ndev = mc->ports[qp->port - 1];
+        mpc = netdev_priv(ndev);
+        pd = container_of(ibpd, struct mana_ib_pd, ibpd);
+
+        mana_destroy_wq_obj(mpc, GDMA_SQ, qp->tx_object);
+
+        if (qp->sq_umem) {
+                mana_ib_gd_destroy_dma_region(mdev, qp->sq_gdma_region);
+                ib_umem_release(qp->sq_umem);
+        }
+
+        mana_ib_uncfg_vport(mdev, pd, qp->port - 1);
+
+        return 0;
+}
+
+int mana_ib_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
+{
+        struct mana_ib_qp *qp = container_of(ibqp, struct mana_ib_qp, ibqp);
+
+        switch (ibqp->qp_type) {
+        case IB_QPT_RAW_PACKET:
+                if (ibqp->rwq_ind_tbl)
+                        return mana_ib_destroy_qp_rss(qp, ibqp->rwq_ind_tbl,
+                                                      udata);
+
+                return mana_ib_destroy_qp_raw(qp, udata);
+
+        default:
+                ibdev_dbg(ibqp->device, "Unexpected QP type %u\n",
+                          ibqp->qp_type);
+        }
+
+        return -ENOENT;
+}
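In verbs terms, the dispatch above means every MANA QP is an IBV_QPT_RAW_PACKET QP. A hedged consumer-side sketch (the MANA-specific ucmd, including sq_buf_addr, is marshaled by the provider library and not shown here):

        struct ibv_qp_init_attr qia = {
                .qp_type = IBV_QPT_RAW_PACKET,  /* requires CAP_NET_RAW */
                .send_cq = cq,
                .recv_cq = cq,
                .cap = { .max_send_wr = 64, .max_send_sge = 2 },
        };
        struct ibv_qp *qp = ibv_create_qp(pd, &qia); /* -> mana_ib_create_qp_raw() */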
diff --git a/drivers/infiniband/hw/mana/wq.c b/drivers/infiniband/hw/mana/wq.c
new file mode 100644
index 000000000000..372d361510e0
--- /dev/null
+++ b/drivers/infiniband/hw/mana/wq.c
@@ -0,0 +1,115 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2022, Microsoft Corporation. All rights reserved.
+ */
+
+#include "mana_ib.h"
+
+struct ib_wq *mana_ib_create_wq(struct ib_pd *pd,
+                                struct ib_wq_init_attr *init_attr,
+                                struct ib_udata *udata)
+{
+        struct mana_ib_dev *mdev =
+                container_of(pd->device, struct mana_ib_dev, ib_dev);
+        struct mana_ib_create_wq ucmd = {};
+        struct mana_ib_wq *wq;
+        struct ib_umem *umem;
+        int err;
+
+        if (udata->inlen < sizeof(ucmd))
+                return ERR_PTR(-EINVAL);
+
+        err = ib_copy_from_udata(&ucmd, udata, min(sizeof(ucmd), udata->inlen));
+        if (err) {
+                ibdev_dbg(&mdev->ib_dev,
+                          "Failed to copy from udata for create wq, %d\n", err);
+                return ERR_PTR(err);
+        }
+
+        wq = kzalloc(sizeof(*wq), GFP_KERNEL);
+        if (!wq)
+                return ERR_PTR(-ENOMEM);
+
+        ibdev_dbg(&mdev->ib_dev, "ucmd wq_buf_addr 0x%llx\n", ucmd.wq_buf_addr);
+
+        umem = ib_umem_get(pd->device, ucmd.wq_buf_addr, ucmd.wq_buf_size,
+                           IB_ACCESS_LOCAL_WRITE);
+        if (IS_ERR(umem)) {
+                err = PTR_ERR(umem);
+                ibdev_dbg(&mdev->ib_dev,
+                          "Failed to get umem for create wq, err %d\n", err);
+                goto err_free_wq;
+        }
+
+        wq->umem = umem;
+        wq->wqe = init_attr->max_wr;
+        wq->wq_buf_size = ucmd.wq_buf_size;
+        wq->rx_object = INVALID_MANA_HANDLE;
+
+        err = mana_ib_gd_create_dma_region(mdev, wq->umem, &wq->gdma_region);
+        if (err) {
+                ibdev_dbg(&mdev->ib_dev,
+                          "Failed to create dma region for create wq, %d\n",
+                          err);
+                goto err_release_umem;
+        }
+
+        ibdev_dbg(&mdev->ib_dev,
+                  "mana_ib_gd_create_dma_region ret %d gdma_region 0x%llx\n",
+                  err, wq->gdma_region);
+
+        /* The WQ ID is returned at wq_create time; it is not known here yet */
+
+        return &wq->ibwq;
+
+err_release_umem:
+        ib_umem_release(umem);
+
+err_free_wq:
+        kfree(wq);
+
+        return ERR_PTR(err);
+}
+
+int mana_ib_modify_wq(struct ib_wq *wq, struct ib_wq_attr *wq_attr,
+                      u32 wq_attr_mask, struct ib_udata *udata)
+{
+        /* modify_wq is not supported by this version of the driver */
+        return -EOPNOTSUPP;
+}
+
+int mana_ib_destroy_wq(struct ib_wq *ibwq, struct ib_udata *udata)
+{
+        struct mana_ib_wq *wq = container_of(ibwq, struct mana_ib_wq, ibwq);
+        struct ib_device *ib_dev = ibwq->device;
+        struct mana_ib_dev *mdev;
+
+        mdev = container_of(ib_dev, struct mana_ib_dev, ib_dev);
+
+        mana_ib_gd_destroy_dma_region(mdev, wq->gdma_region);
+        ib_umem_release(wq->umem);
+
+        kfree(wq);
+
+        return 0;
+}
+
+int mana_ib_create_rwq_ind_table(struct ib_rwq_ind_table *ib_rwq_ind_table,
+                                 struct ib_rwq_ind_table_init_attr *init_attr,
+                                 struct ib_udata *udata)
+{
+        /*
+         * There is no additional data in ind_table to be maintained by this
+         * driver; do nothing.
+         */
+        return 0;
+}
+
+int mana_ib_destroy_rwq_ind_table(struct ib_rwq_ind_table *ib_rwq_ind_tbl)
+{
+        /*
+         * There is no additional data in ind_table to be maintained by this
+         * driver; do nothing.
+         */
+        return 0;
+}
diff --git a/include/net/mana/mana.h b/include/net/mana/mana.h
index 713a8f8cca9a..20212ffeefb9 100644
--- a/include/net/mana/mana.h
+++ b/include/net/mana/mana.h
@@ -412,6 +412,9 @@ int mana_bpf(struct net_device *ndev, struct netdev_bpf *bpf);
 
 extern const struct ethtool_ops mana_ethtool_ops;
 
+/* A CQ can be created without being associated with any EQ */
+#define GDMA_CQ_NO_EQ   0xffff
+
 struct mana_obj_spec {
         u32 queue_index;
         u64 gdma_region;
diff --git a/include/uapi/rdma/ib_user_ioctl_verbs.h b/include/uapi/rdma/ib_user_ioctl_verbs.h
index 7dd56210226f..e0c25537fd2e 100644
--- a/include/uapi/rdma/ib_user_ioctl_verbs.h
+++ b/include/uapi/rdma/ib_user_ioctl_verbs.h
@@ -251,6 +251,7 @@ enum rdma_driver_id {
         RDMA_DRIVER_EFA,
         RDMA_DRIVER_SIW,
         RDMA_DRIVER_ERDMA,
+        RDMA_DRIVER_MANA,
 };
 
 enum ib_uverbs_gid_type {
diff --git a/include/uapi/rdma/mana-abi.h b/include/uapi/rdma/mana-abi.h
new file mode 100644
index 000000000000..5fcb31b37fb9
--- /dev/null
+++ b/include/uapi/rdma/mana-abi.h
@@ -0,0 +1,66 @@
+/* SPDX-License-Identifier: (GPL-2.0 WITH Linux-syscall-note) */
+/*
+ * Copyright (c) 2022, Microsoft Corporation. All rights reserved.
+ */
+
+#ifndef MANA_ABI_USER_H
+#define MANA_ABI_USER_H
+
+#include <linux/types.h>
+#include <rdma/ib_user_ioctl_verbs.h>
+
+/*
+ * Increment this value if any changes that break userspace ABI
+ * compatibility are made.
+ */
+
+#define MANA_IB_UVERBS_ABI_VERSION 1
+
+struct mana_ib_create_cq {
+        __aligned_u64 buf_addr;
+};
+
+struct mana_ib_create_qp {
+        __aligned_u64 sq_buf_addr;
+        __u32 sq_buf_size;
+        __u32 port;
+};
+
+struct mana_ib_create_qp_resp {
+        __u32 sqid;
+        __u32 cqid;
+        __u32 tx_vp_offset;
+        __u32 reserved;
+};
+
+struct mana_ib_create_wq {
+        __aligned_u64 wq_buf_addr;
+        __u32 wq_buf_size;
+        __u32 reserved;
+};
+
+/* RX Hash function flags */
+enum mana_ib_rx_hash_function_flags {
+        MANA_IB_RX_HASH_FUNC_TOEPLITZ = 1 << 0,
+};
+
+struct mana_ib_create_qp_rss {
+        __aligned_u64 rx_hash_fields_mask;
+        __u8 rx_hash_function;
+        __u8 reserved[7];
+        __u32 rx_hash_key_len;
+        __u8 rx_hash_key[40];
+        __u32 port;
+};
+
+struct rss_resp_entry {
+        __u32 cqid;
+        __u32 wqid;
+};
+
+struct mana_ib_create_qp_rss_resp {
+        __aligned_u64 num_entries;
+        struct rss_resp_entry entries[64];
+};
+
+#endif
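For reference, this is roughly what crosses the ABI boundary when a raw QP is created — a sketch only; the uverbs command framing around it is handled by rdma-core, and the field values here are invented:

        struct mana_ib_create_qp cmd = {
                .sq_buf_addr = (__u64)(uintptr_t)sq_buf, /* userspace SQ ring */
                .sq_buf_size = sq_buf_size,
                .port        = 1,                        /* IB port, 1-based */
        };
        struct mana_ib_create_qp_resp resp; /* sqid/cqid/tx_vp_offset filled in by the kernel */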