From patchwork Wed Nov 9 18:42:41 2022
X-Patchwork-Submitter: Ajit Khaparde
X-Patchwork-Id: 17742
From: Ajit Khaparde
To: ajit.khaparde@broadcom.com
Cc: andrew.gospodarek@broadcom.com, davem@davemloft.net, edumazet@google.com,
    jgg@ziepe.ca, kuba@kernel.org, leon@kernel.org,
    linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org,
    michael.chan@broadcom.com, netdev@vger.kernel.org, pabeni@redhat.com,
    selvin.xavier@broadcom.com, Leon Romanovsky
Subject: [PATCH v4 3/6] bnxt_en: Remove usage of ulp_id
Date: Wed, 9 Nov 2022 10:42:41 -0800
Message-Id: <20221109184244.7032-4-ajit.khaparde@broadcom.com>
X-Mailer: git-send-email 2.37.1 (Apple Git-137.1)
In-Reply-To: <20221109184244.7032-1-ajit.khaparde@broadcom.com>
References: <20221109184244.7032-1-ajit.khaparde@broadcom.com>

Since the driver continues to use the single ULP model, the extra
complexity and indirection are unnecessary. Remove the usage of ulp_id
from the code.

Suggested-by: Leon Romanovsky
Signed-off-by: Ajit Khaparde
Reviewed-by: Andy Gospodarek
Reviewed-by: Selvin Xavier
---
 drivers/infiniband/hw/bnxt_re/main.c          |  24 +-
 drivers/net/ethernet/broadcom/bnxt/bnxt.c     |   2 +-
 drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c | 211 ++++++++----------
 drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h |  26 +--
 4 files changed, 112 insertions(+), 151 deletions(-)

diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c
index a9fb1067c9a1..e60bc3ce9c3d 100644
--- a/drivers/infiniband/hw/bnxt_re/main.c
+++ b/drivers/infiniband/hw/bnxt_re/main.c
@@ -366,8 +366,7 @@ static int bnxt_re_unregister_netdev(struct bnxt_re_dev *rdev)
 
 	en_dev = rdev->en_dev;
 
-	rc = en_dev->en_ops->bnxt_unregister_device(rdev->en_dev,
-						    BNXT_ROCE_ULP);
+	rc = en_dev->en_ops->bnxt_unregister_device(rdev->en_dev);
 	return rc;
 }
 
@@ -381,7 +380,7 @@ static int bnxt_re_register_netdev(struct bnxt_re_dev *rdev)
 
 	en_dev = rdev->en_dev;
 
-	rc = en_dev->en_ops->bnxt_register_device(en_dev, BNXT_ROCE_ULP,
+	rc = en_dev->en_ops->bnxt_register_device(en_dev,
 						  &bnxt_re_ulp_ops, rdev);
 	rdev->qplib_res.pdev = rdev->en_dev->pdev;
 	return rc;
@@ -390,16 +389,15 @@ static int bnxt_re_free_msix(struct bnxt_re_dev *rdev)
 {
 	struct bnxt_en_dev *en_dev;
-	int rc;
 
 	if (!rdev)
 		return -EINVAL;
 
 	en_dev = rdev->en_dev;
 
-	rc = en_dev->en_ops->bnxt_free_msix(rdev->en_dev, BNXT_ROCE_ULP);
+	en_dev->en_ops->bnxt_free_msix(rdev->en_dev);
 
-	return rc;
+	return 0;
 }
 
 static int bnxt_re_request_msix(struct bnxt_re_dev *rdev)
@@ -414,7 +412,7 @@
 
 	num_msix_want = min_t(u32, BNXT_RE_MAX_MSIX, num_online_cpus());
 
-	num_msix_got = en_dev->en_ops->bnxt_request_msix(en_dev, BNXT_ROCE_ULP,
+	num_msix_got = en_dev->en_ops->bnxt_request_msix(en_dev,
 							 rdev->msix_entries,
 							 num_msix_want);
 	if (num_msix_got < BNXT_RE_MIN_MSIX) {
@@ -477,7 +475,7 @@ static int bnxt_re_net_ring_free(struct bnxt_re_dev *rdev,
 	req.ring_id = cpu_to_le16(fw_ring_id);
 	bnxt_re_fill_fw_msg(&fw_msg, (void *)&req, sizeof(req), (void *)&resp,
 			    sizeof(resp), DFLT_HWRM_CMD_TIMEOUT);
-	rc = en_dev->en_ops->bnxt_send_fw_msg(en_dev, BNXT_ROCE_ULP, &fw_msg);
+	rc = en_dev->en_ops->bnxt_send_fw_msg(en_dev, &fw_msg);
 	if (rc)
 		ibdev_err(&rdev->ibdev, "Failed to free HW ring:%d :%#x",
 			  req.ring_id, rc);
@@ -514,7 +512,7 @@ static int bnxt_re_net_ring_alloc(struct bnxt_re_dev *rdev,
 	req.int_mode = ring_attr->mode;
 	bnxt_re_fill_fw_msg(&fw_msg, (void *)&req, sizeof(req), (void *)&resp,
 			    sizeof(resp), DFLT_HWRM_CMD_TIMEOUT);
-	rc = en_dev->en_ops->bnxt_send_fw_msg(en_dev, BNXT_ROCE_ULP, &fw_msg);
+	rc = en_dev->en_ops->bnxt_send_fw_msg(en_dev, &fw_msg);
 	if (!rc)
 		*fw_ring_id = le16_to_cpu(resp.ring_id);
 
@@ -542,7 +540,7 @@ static int bnxt_re_net_stats_ctx_free(struct bnxt_re_dev *rdev,
 	req.stat_ctx_id = cpu_to_le32(fw_stats_ctx_id);
 	bnxt_re_fill_fw_msg(&fw_msg, (void *)&req, sizeof(req), (void *)&resp,
 			    sizeof(resp), DFLT_HWRM_CMD_TIMEOUT);
-	rc = en_dev->en_ops->bnxt_send_fw_msg(en_dev, BNXT_ROCE_ULP, &fw_msg);
+	rc = en_dev->en_ops->bnxt_send_fw_msg(en_dev, &fw_msg);
 	if (rc)
 		ibdev_err(&rdev->ibdev, "Failed to free HW stats context %#x",
 			  rc);
@@ -575,7 +573,7 @@ static int bnxt_re_net_stats_ctx_alloc(struct bnxt_re_dev *rdev,
 	req.stat_ctx_flags = STAT_CTX_ALLOC_REQ_STAT_CTX_FLAGS_ROCE;
 	bnxt_re_fill_fw_msg(&fw_msg, (void *)&req, sizeof(req), (void *)&resp,
 			    sizeof(resp), DFLT_HWRM_CMD_TIMEOUT);
-	rc = en_dev->en_ops->bnxt_send_fw_msg(en_dev, BNXT_ROCE_ULP, &fw_msg);
+	rc = en_dev->en_ops->bnxt_send_fw_msg(en_dev, &fw_msg);
 	if (!rc)
 		*fw_stats_ctx_id = le32_to_cpu(resp.stat_ctx_id);
 
@@ -1061,7 +1059,7 @@ static int bnxt_re_query_hwrm_pri2cos(struct bnxt_re_dev *rdev, u8 dir,
 
 	bnxt_re_fill_fw_msg(&fw_msg, (void *)&req, sizeof(req), (void *)&resp,
 			    sizeof(resp), DFLT_HWRM_CMD_TIMEOUT);
-	rc = en_dev->en_ops->bnxt_send_fw_msg(en_dev, BNXT_ROCE_ULP, &fw_msg);
+	rc = en_dev->en_ops->bnxt_send_fw_msg(en_dev, &fw_msg);
 	if (rc)
 		return rc;
 
@@ -1247,7 +1245,7 @@ static void bnxt_re_query_hwrm_intf_version(struct bnxt_re_dev *rdev)
 	req.hwrm_intf_upd = HWRM_VERSION_UPDATE;
 	bnxt_re_fill_fw_msg(&fw_msg, (void *)&req, sizeof(req), (void *)&resp,
 			    sizeof(resp), DFLT_HWRM_CMD_TIMEOUT);
-	rc = en_dev->en_ops->bnxt_send_fw_msg(en_dev, BNXT_ROCE_ULP, &fw_msg);
+	rc = en_dev->en_ops->bnxt_send_fw_msg(en_dev, &fw_msg);
 	if (rc) {
 		ibdev_err(&rdev->ibdev,
 			  "Failed to query HW version, rc = 0x%x", rc);
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 16265d177639..1b573e015b5e 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -5533,7 +5533,7 @@ int bnxt_hwrm_vnic_cfg(struct bnxt *bp, u16 vnic_id)
 #endif
 	if ((bp->flags & BNXT_FLAG_STRIP_VLAN) || def_vlan)
 		req->flags |= cpu_to_le32(VNIC_CFG_REQ_FLAGS_VLAN_STRIP_MODE);
-	if (!vnic_id && bnxt_ulp_registered(bp->edev, BNXT_ROCE_ULP))
+	if (!vnic_id && bnxt_ulp_registered(bp->edev))
 		req->flags |= cpu_to_le32(bnxt_get_roce_vnic_mode(bp));
 
 	return hwrm_req_send(bp, req);
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
index 1d2d30f97ed9..3ea2e1de2e29 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
@@ -28,64 +28,44 @@
 static DEFINE_IDA(bnxt_aux_dev_ids);
 
-static int bnxt_register_dev(struct bnxt_en_dev *edev, unsigned int ulp_id,
-			     struct bnxt_ulp_ops *ulp_ops, void *handle)
+static int bnxt_register_dev(struct bnxt_en_dev *edev,
+			     struct bnxt_ulp_ops *ulp_ops,
+			     void *handle)
 {
 	struct net_device *dev = edev->net;
 	struct bnxt *bp = netdev_priv(dev);
+	unsigned int max_stat_ctxs;
 	struct bnxt_ulp *ulp;
-	int rc = 0;
 
-	if (ulp_id >= BNXT_MAX_ULP)
-		return -EINVAL;
+	max_stat_ctxs = bnxt_get_max_func_stat_ctxs(bp);
+	if (max_stat_ctxs <= BNXT_MIN_ROCE_STAT_CTXS ||
+	    bp->cp_nr_rings == max_stat_ctxs)
+		return -ENOMEM;
 
-	ulp = &edev->ulp_tbl[ulp_id];
-	if (rcu_access_pointer(ulp->ulp_ops)) {
-		netdev_err(bp->dev, "ulp id %d already registered\n", ulp_id);
-		rc = -EBUSY;
-		goto exit;
-	}
-	if (ulp_id == BNXT_ROCE_ULP) {
-		unsigned int max_stat_ctxs;
-
-		max_stat_ctxs = bnxt_get_max_func_stat_ctxs(bp);
-		if (max_stat_ctxs <= BNXT_MIN_ROCE_STAT_CTXS ||
-		    bp->cp_nr_rings == max_stat_ctxs) {
-			rc = -ENOMEM;
-			goto exit;
-		}
-	}
+	ulp = kzalloc(sizeof(*ulp), GFP_KERNEL);
+	if (!ulp)
+		return -ENOMEM;
 
-	atomic_set(&ulp->ref_count, 1);
+	edev->ulp_tbl = ulp;
 	ulp->handle = handle;
 	rcu_assign_pointer(ulp->ulp_ops, ulp_ops);
 
-	if (ulp_id == BNXT_ROCE_ULP) {
-		if (test_bit(BNXT_STATE_OPEN, &bp->state))
-			bnxt_hwrm_vnic_cfg(bp, 0);
-	}
+	if (test_bit(BNXT_STATE_OPEN, &bp->state))
+		bnxt_hwrm_vnic_cfg(bp, 0);
 
-exit:
-	return rc;
+	return 0;
 }
 
-static int bnxt_unregister_dev(struct bnxt_en_dev *edev, unsigned int ulp_id)
+static int bnxt_unregister_dev(struct bnxt_en_dev *edev)
 {
 	struct net_device *dev = edev->net;
 	struct bnxt *bp = netdev_priv(dev);
 	struct bnxt_ulp *ulp;
 	int i = 0;
 
-	if (ulp_id >= BNXT_MAX_ULP)
-		return -EINVAL;
-
-	ulp = &edev->ulp_tbl[ulp_id];
-	if (!rcu_access_pointer(ulp->ulp_ops)) {
-		netdev_err(bp->dev, "ulp id %d not registered\n", ulp_id);
-		return -EINVAL;
-	}
-	if (ulp_id == BNXT_ROCE_ULP && ulp->msix_requested)
-		edev->en_ops->bnxt_free_msix(edev, ulp_id);
+	ulp = edev->ulp_tbl;
+	if (ulp->msix_requested)
+		edev->en_ops->bnxt_free_msix(edev);
 
 	if (ulp->max_async_event_id)
 		bnxt_hwrm_func_drv_rgtr(bp, NULL, 0, true);
@@ -98,6 +78,8 @@ static int bnxt_unregister_dev(struct bnxt_en_dev *edev, unsigned int ulp_id)
 		msleep(100);
 		i++;
 	}
+	kfree(ulp);
+	edev->ulp_tbl = NULL;
 	return 0;
 }
 
@@ -106,8 +88,8 @@ static void bnxt_fill_msix_vecs(struct bnxt *bp, struct bnxt_msix_entry *ent)
 	struct bnxt_en_dev *edev = bp->edev;
 	int num_msix, idx, i;
 
-	num_msix = edev->ulp_tbl[BNXT_ROCE_ULP].msix_requested;
-	idx = edev->ulp_tbl[BNXT_ROCE_ULP].msix_base;
+	num_msix = edev->ulp_tbl->msix_requested;
+	idx = edev->ulp_tbl->msix_base;
 	for (i = 0; i < num_msix; i++) {
 		ent[i].vector = bp->irq_tbl[idx + i].vector;
 		ent[i].ring_idx = idx + i;
@@ -121,8 +103,9 @@ static void bnxt_fill_msix_vecs(struct bnxt *bp, struct bnxt_msix_entry *ent)
 	}
 }
 
-static int bnxt_req_msix_vecs(struct bnxt_en_dev *edev, unsigned int ulp_id,
-			      struct bnxt_msix_entry *ent, int num_msix)
+static int bnxt_req_msix_vecs(struct bnxt_en_dev *edev,
+			      struct bnxt_msix_entry *ent,
+			      int num_msix)
 {
 	struct net_device *dev = edev->net;
 	struct bnxt *bp = netdev_priv(dev);
@@ -132,13 +115,10 @@ static int bnxt_req_msix_vecs(struct bnxt_en_dev *edev, unsigned int ulp_id,
 	int total_vecs;
 	int rc = 0;
 
-	if (ulp_id != BNXT_ROCE_ULP)
-		return -EINVAL;
-
 	if (!(bp->flags & BNXT_FLAG_USING_MSIX))
 		return -ENODEV;
 
-	if (edev->ulp_tbl[ulp_id].msix_requested)
+	if (edev->ulp_tbl->msix_requested)
 		return -EAGAIN;
 
 	max_cp_rings = bnxt_get_max_func_cp_rings(bp);
@@ -155,8 +135,8 @@ static int bnxt_req_msix_vecs(struct bnxt_en_dev *edev, unsigned int ulp_id,
 		idx = max_idx - avail_msix;
 	}
 
-	edev->ulp_tbl[ulp_id].msix_base = idx;
-	edev->ulp_tbl[ulp_id].msix_requested = avail_msix;
+	edev->ulp_tbl->msix_base = idx;
+	edev->ulp_tbl->msix_requested = avail_msix;
 	hw_resc = &bp->hw_resc;
 	total_vecs = idx + avail_msix;
 	if (bp->total_irqs < total_vecs ||
@@ -171,7 +151,7 @@ static int bnxt_req_msix_vecs(struct bnxt_en_dev *edev, unsigned int ulp_id,
 		}
 	}
 	if (rc) {
-		edev->ulp_tbl[ulp_id].msix_requested = 0;
+		edev->ulp_tbl->msix_requested = 0;
 		return -EAGAIN;
 	}
 
@@ -180,25 +160,22 @@ static int bnxt_req_msix_vecs(struct bnxt_en_dev *edev, unsigned int ulp_id,
 		resv_msix = hw_resc->resv_irqs - bp->cp_nr_rings;
 		avail_msix = min_t(int, resv_msix, avail_msix);
-		edev->ulp_tbl[ulp_id].msix_requested = avail_msix;
+		edev->ulp_tbl->msix_requested = avail_msix;
 	}
 	bnxt_fill_msix_vecs(bp, ent);
 	edev->flags |= BNXT_EN_FLAG_MSIX_REQUESTED;
 	return avail_msix;
 }
 
-static int bnxt_free_msix_vecs(struct bnxt_en_dev *edev, unsigned int ulp_id)
+static void bnxt_free_msix_vecs(struct bnxt_en_dev *edev)
 {
 	struct net_device *dev = edev->net;
 	struct bnxt *bp = netdev_priv(dev);
 
-	if (ulp_id != BNXT_ROCE_ULP)
-		return -EINVAL;
-
 	if (!(edev->flags & BNXT_EN_FLAG_MSIX_REQUESTED))
-		return 0;
+		return;
 
-	edev->ulp_tbl[ulp_id].msix_requested = 0;
+	edev->ulp_tbl->msix_requested = 0;
 	edev->flags &= ~BNXT_EN_FLAG_MSIX_REQUESTED;
 	if (netif_running(dev) && !(edev->flags & BNXT_EN_FLAG_ULP_STOPPED)) {
 		rtnl_lock();
@@ -207,43 +184,43 @@ static int bnxt_free_msix_vecs(struct bnxt_en_dev *edev, unsigned int ulp_id)
 		rtnl_unlock();
 	}
 
-	return 0;
+	return;
 }
 
 int bnxt_get_ulp_msix_num(struct bnxt *bp)
 {
-	if (bnxt_ulp_registered(bp->edev, BNXT_ROCE_ULP)) {
+	if (bnxt_ulp_registered(bp->edev)) {
 		struct bnxt_en_dev *edev = bp->edev;
 
-		return edev->ulp_tbl[BNXT_ROCE_ULP].msix_requested;
+		return edev->ulp_tbl->msix_requested;
 	}
 	return 0;
 }
 
 int bnxt_get_ulp_msix_base(struct bnxt *bp)
 {
-	if (bnxt_ulp_registered(bp->edev, BNXT_ROCE_ULP)) {
+	if (bnxt_ulp_registered(bp->edev)) {
 		struct bnxt_en_dev *edev = bp->edev;
 
-		if (edev->ulp_tbl[BNXT_ROCE_ULP].msix_requested)
-			return edev->ulp_tbl[BNXT_ROCE_ULP].msix_base;
+		if (edev->ulp_tbl->msix_requested)
+			return edev->ulp_tbl->msix_base;
 	}
 	return 0;
 }
 
 int bnxt_get_ulp_stat_ctxs(struct bnxt *bp)
 {
-	if (bnxt_ulp_registered(bp->edev, BNXT_ROCE_ULP)) {
+	if (bnxt_ulp_registered(bp->edev)) {
 		struct bnxt_en_dev *edev = bp->edev;
 
-		if (edev->ulp_tbl[BNXT_ROCE_ULP].msix_requested)
+		if (edev->ulp_tbl->msix_requested)
 			return BNXT_MIN_ROCE_STAT_CTXS;
 	}
 	return 0;
 }
 
-static int bnxt_send_msg(struct bnxt_en_dev *edev, unsigned int ulp_id,
+static int bnxt_send_msg(struct bnxt_en_dev *edev,
 			 struct bnxt_fw_msg *fw_msg)
 {
 	struct net_device *dev = edev->net;
@@ -253,7 +230,7 @@ static int bnxt_send_msg(struct bnxt_en_dev *edev, unsigned int ulp_id,
 	u32 resp_len;
 	int rc;
 
-	if (ulp_id != BNXT_ROCE_ULP && bp->fw_reset_state)
+	if (bp->fw_reset_state)
 		return -EBUSY;
 
 	rc = hwrm_req_init(bp, req, 0 /* don't care */);
@@ -292,27 +269,24 @@ void bnxt_ulp_stop(struct bnxt *bp)
 {
 	struct bnxt_en_dev *edev = bp->edev;
 	struct bnxt_ulp_ops *ops;
-	int i;
+	struct bnxt_ulp *ulp;
 
 	if (!edev)
 		return;
 
 	edev->flags |= BNXT_EN_FLAG_ULP_STOPPED;
-	for (i = 0; i < BNXT_MAX_ULP; i++) {
-		struct bnxt_ulp *ulp = &edev->ulp_tbl[i];
-
-		ops = rtnl_dereference(ulp->ulp_ops);
-		if (!ops || !ops->ulp_stop)
-			continue;
-		ops->ulp_stop(ulp->handle);
-	}
+	ulp = edev->ulp_tbl;
+	ops = rtnl_dereference(ulp->ulp_ops);
+	if (!ops || !ops->ulp_stop)
+		return;
+	ops->ulp_stop(ulp->handle);
 }
 
 void bnxt_ulp_start(struct bnxt *bp, int err)
 {
 	struct bnxt_en_dev *edev = bp->edev;
 	struct bnxt_ulp_ops *ops;
-	int i;
+	struct bnxt_ulp *ulp;
 
 	if (!edev)
 		return;
@@ -322,39 +296,33 @@ void bnxt_ulp_start(struct bnxt *bp, int err)
 	if (err)
 		return;
 
-	for (i = 0; i < BNXT_MAX_ULP; i++) {
-		struct bnxt_ulp *ulp = &edev->ulp_tbl[i];
-
-		ops = rtnl_dereference(ulp->ulp_ops);
-		if (!ops || !ops->ulp_start)
-			continue;
-		ops->ulp_start(ulp->handle);
-	}
+	ulp = edev->ulp_tbl;
+	ops = rtnl_dereference(ulp->ulp_ops);
+	if (!ops || !ops->ulp_start)
+		return;
+	ops->ulp_start(ulp->handle);
 }
 
 void bnxt_ulp_sriov_cfg(struct bnxt *bp, int num_vfs)
 {
 	struct bnxt_en_dev *edev = bp->edev;
 	struct bnxt_ulp_ops *ops;
-	int i;
+	struct bnxt_ulp *ulp;
 
 	if (!edev)
 		return;
+	ulp = edev->ulp_tbl;
 
-	for (i = 0; i < BNXT_MAX_ULP; i++) {
-		struct bnxt_ulp *ulp = &edev->ulp_tbl[i];
-
-		rcu_read_lock();
-		ops = rcu_dereference(ulp->ulp_ops);
-		if (!ops || !ops->ulp_sriov_config) {
-			rcu_read_unlock();
-			continue;
-		}
-		bnxt_ulp_get(ulp);
+	rcu_read_lock();
+	ops = rcu_dereference(ulp->ulp_ops);
+	if (!ops || !ops->ulp_sriov_config) {
 		rcu_read_unlock();
-		ops->ulp_sriov_config(ulp->handle, num_vfs);
-		bnxt_ulp_put(ulp);
+		return;
 	}
+	bnxt_ulp_get(ulp);
+	rcu_read_unlock();
+	ops->ulp_sriov_config(ulp->handle, num_vfs);
+	bnxt_ulp_put(ulp);
 }
 
 void bnxt_ulp_irq_stop(struct bnxt *bp)
@@ -365,8 +333,8 @@ void bnxt_ulp_irq_stop(struct bnxt *bp)
 	if (!edev || !(edev->flags & BNXT_EN_FLAG_MSIX_REQUESTED))
 		return;
 
-	if (bnxt_ulp_registered(bp->edev, BNXT_ROCE_ULP)) {
-		struct bnxt_ulp *ulp = &edev->ulp_tbl[BNXT_ROCE_ULP];
+	if (bnxt_ulp_registered(bp->edev)) {
+		struct bnxt_ulp *ulp = edev->ulp_tbl;
 
 		if (!ulp->msix_requested)
 			return;
@@ -386,8 +354,8 @@ void bnxt_ulp_irq_restart(struct bnxt *bp, int err)
 	if (!edev || !(edev->flags & BNXT_EN_FLAG_MSIX_REQUESTED))
 		return;
 
-	if (bnxt_ulp_registered(bp->edev, BNXT_ROCE_ULP)) {
-		struct bnxt_ulp *ulp = &edev->ulp_tbl[BNXT_ROCE_ULP];
+	if (bnxt_ulp_registered(bp->edev)) {
+		struct bnxt_ulp *ulp = edev->ulp_tbl;
 		struct bnxt_msix_entry *ent = NULL;
 
 		if (!ulp->msix_requested)
@@ -414,41 +382,36 @@ void bnxt_ulp_async_events(struct bnxt *bp, struct hwrm_async_event_cmpl *cmpl)
 	u16 event_id = le16_to_cpu(cmpl->event_id);
 	struct bnxt_en_dev *edev = bp->edev;
 	struct bnxt_ulp_ops *ops;
-	int i;
+	struct bnxt_ulp *ulp;
 
 	if (!edev)
 		return;
+	ulp = edev->ulp_tbl;
 
 	rcu_read_lock();
-	for (i = 0; i < BNXT_MAX_ULP; i++) {
-		struct bnxt_ulp *ulp = &edev->ulp_tbl[i];
-
-		ops = rcu_dereference(ulp->ulp_ops);
-		if (!ops || !ops->ulp_async_notifier)
-			continue;
-		if (!ulp->async_events_bmap ||
-		    event_id > ulp->max_async_event_id)
-			continue;
-
-		/* Read max_async_event_id first before testing the bitmap. */
-		smp_rmb();
-		if (test_bit(event_id, ulp->async_events_bmap))
-			ops->ulp_async_notifier(ulp->handle, cmpl);
-	}
+
+	ops = rcu_dereference(ulp->ulp_ops);
+	if (!ops || !ops->ulp_async_notifier)
+		return;
+	if (!ulp->async_events_bmap || event_id > ulp->max_async_event_id)
+		return;
+
+	/* Read max_async_event_id first before testing the bitmap. */
+	smp_rmb();
+	if (test_bit(event_id, ulp->async_events_bmap))
+		ops->ulp_async_notifier(ulp->handle, cmpl);
 	rcu_read_unlock();
 }
 
-static int bnxt_register_async_events(struct bnxt_en_dev *edev, unsigned int ulp_id,
-				      unsigned long *events_bmap, u16 max_id)
+static int bnxt_register_async_events(struct bnxt_en_dev *edev,
+				      unsigned long *events_bmap,
+				      u16 max_id)
 {
 	struct net_device *dev = edev->net;
 	struct bnxt *bp = netdev_priv(dev);
 	struct bnxt_ulp *ulp;
 
-	if (ulp_id >= BNXT_MAX_ULP)
-		return -EINVAL;
-
-	ulp = &edev->ulp_tbl[ulp_id];
+	ulp = edev->ulp_tbl;
 	ulp->async_events_bmap = events_bmap;
 	/* Make sure bnxt_ulp_async_events() sees this order */
 	smp_wmb();
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h
index aaf55d847505..7b2b829300b4 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h
@@ -65,7 +65,7 @@ struct bnxt_en_dev {
 	#define BNXT_EN_FLAG_MSIX_REQUESTED	0x4
 	#define BNXT_EN_FLAG_ULP_STOPPED	0x8
 	const struct bnxt_en_ops	*en_ops;
-	struct bnxt_ulp			ulp_tbl[BNXT_MAX_ULP];
+	struct bnxt_ulp			*ulp_tbl;
 	int				l2_db_size;	/* Doorbell BAR size in
 							 * bytes mapped by L2
 							 * driver.
@@ -77,21 +77,21 @@ struct bnxt_en_dev {
 };
 
 struct bnxt_en_ops {
-	int (*bnxt_register_device)(struct bnxt_en_dev *, unsigned int,
-				    struct bnxt_ulp_ops *, void *);
-	int (*bnxt_unregister_device)(struct bnxt_en_dev *, unsigned int);
-	int (*bnxt_request_msix)(struct bnxt_en_dev *, unsigned int,
-				 struct bnxt_msix_entry *, int);
-	int (*bnxt_free_msix)(struct bnxt_en_dev *, unsigned int);
-	int (*bnxt_send_fw_msg)(struct bnxt_en_dev *, unsigned int,
-				struct bnxt_fw_msg *);
-	int (*bnxt_register_fw_async_events)(struct bnxt_en_dev *, unsigned int,
-					     unsigned long *, u16);
+	int (*bnxt_register_device)(struct bnxt_en_dev *edev,
+				    struct bnxt_ulp_ops *ulp_ops, void *handle);
+	int (*bnxt_unregister_device)(struct bnxt_en_dev *edev);
+	int (*bnxt_request_msix)(struct bnxt_en_dev *edev,
+				 struct bnxt_msix_entry *ent, int num_msix);
+	void (*bnxt_free_msix)(struct bnxt_en_dev *edev);
+	int (*bnxt_send_fw_msg)(struct bnxt_en_dev *edev,
+				struct bnxt_fw_msg *fw_msg);
+	int (*bnxt_register_fw_async_events)(struct bnxt_en_dev *edev,
+					     unsigned long *events_bmap, u16 max_id);
 };
 
-static inline bool bnxt_ulp_registered(struct bnxt_en_dev *edev, int ulp_id)
+static inline bool bnxt_ulp_registered(struct bnxt_en_dev *edev)
 {
-	if (edev && rcu_access_pointer(edev->ulp_tbl[ulp_id].ulp_ops))
+	if (edev && edev->ulp_tbl)
 		return true;
 	return false;
 }
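
For reviewers skimming the hunks above, the caller-visible change is that every
bnxt_en_ops callback loses its ulp_id argument and bnxt_free_msix() becomes
void. A minimal sketch of a consumer under the new signatures; my_ulp_ops,
my_ulp_attach() and my_ulp_detach() are hypothetical stand-ins for the real
bnxt_re code, and error handling is trimmed:

	/* Hypothetical example, not part of the patch. */
	static struct bnxt_ulp_ops my_ulp_ops;

	static int my_ulp_attach(struct bnxt_en_dev *edev, void *handle)
	{
		/* Before this patch the call carried a ULP id:
		 *   edev->en_ops->bnxt_register_device(edev, BNXT_ROCE_ULP,
		 *                                      &my_ulp_ops, handle);
		 */
		return edev->en_ops->bnxt_register_device(edev, &my_ulp_ops,
							  handle);
	}

	static void my_ulp_detach(struct bnxt_en_dev *edev)
	{
		/* bnxt_free_msix() no longer returns a value. */
		edev->en_ops->bnxt_free_msix(edev);
		edev->en_ops->bnxt_unregister_device(edev);
	}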