Message ID | 20230403075657.168294-1-schnelle@linux.ibm.com |
---|---|
State | New |
Series | net/mlx5: stop waiting for PCI link if reset is required |
Commit Message
Niklas Schnelle
April 3, 2023, 7:56 a.m. UTC
After an error on the PCI link, the driver does not need to wait for the link to become functional again, as a reset is required. Stop the wait loop in this case to accelerate the recovery flow.

Co-developed-by: Alexander Schmidt <alexs@linux.ibm.com>
Signed-off-by: Alexander Schmidt <alexs@linux.ibm.com>
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
---
 drivers/net/ethernet/mellanox/mlx5/core/health.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

base-commit: 7e364e56293bb98cae1b55fd835f5991c4e96e7d
Comments
On Mon, Apr 03, 2023 at 09:56:56AM +0200, Niklas Schnelle wrote:
> after an error on the PCI link, the driver does not need to wait
> for the link to become functional again as a reset is required. Stop
> the wait loop in this case to accelerate the recovery flow.
>
> Co-developed-by: Alexander Schmidt <alexs@linux.ibm.com>
> Signed-off-by: Alexander Schmidt <alexs@linux.ibm.com>
> Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
> ---
>  drivers/net/ethernet/mellanox/mlx5/core/health.c | 12 ++++++++++--
>  1 file changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/health.c b/drivers/net/ethernet/mellanox/mlx5/core/health.c
> index f9438d4e43ca..81ca44e0705a 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/health.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/health.c
> @@ -325,6 +325,8 @@ int mlx5_health_wait_pci_up(struct mlx5_core_dev *dev)
>  	while (sensor_pci_not_working(dev)) {

According to the comment in sensor_pci_not_working(), this loop is
supposed to wait until PCI is ready again. Otherwise, already in the
first iteration, we will bail out with a pci_channel_offline() error.

Thanks

>  		if (time_after(jiffies, end))
>  			return -ETIMEDOUT;
> +		if (pci_channel_offline(dev->pdev))
> +			return -EIO;
>  		msleep(100);
>  	}
>  	return 0;
> @@ -332,10 +334,16 @@ int mlx5_health_wait_pci_up(struct mlx5_core_dev *dev)
>
>  static int mlx5_health_try_recover(struct mlx5_core_dev *dev)
>  {
> +	int rc;
> +
>  	mlx5_core_warn(dev, "handling bad device here\n");
>  	mlx5_handle_bad_state(dev);
> -	if (mlx5_health_wait_pci_up(dev)) {
> -		mlx5_core_err(dev, "health recovery flow aborted, PCI reads still not working\n");
> +	rc = mlx5_health_wait_pci_up(dev);
> +	if (rc) {
> +		if (rc == -ETIMEDOUT)
> +			mlx5_core_err(dev, "health recovery flow aborted, PCI reads still not working\n");
> +		else
> +			mlx5_core_err(dev, "health recovery flow aborted, PCI channel offline\n");
>  		return -EIO;
>  	}
>  	mlx5_core_err(dev, "starting health recovery flow\n");
>
> base-commit: 7e364e56293bb98cae1b55fd835f5991c4e96e7d
> --
> 2.37.2
On Mon, 2023-04-03 at 21:21 +0300, Leon Romanovsky wrote:
> On Mon, Apr 03, 2023 at 09:56:56AM +0200, Niklas Schnelle wrote:
> > after an error on the PCI link, the driver does not need to wait
> > for the link to become functional again as a reset is required. Stop
> > the wait loop in this case to accelerate the recovery flow.
> >
> > [...]
> >
> > @@ -325,6 +325,8 @@ int mlx5_health_wait_pci_up(struct mlx5_core_dev *dev)
> >  	while (sensor_pci_not_working(dev)) {
>
> According to the comment in sensor_pci_not_working(), this loop is
> supposed to wait till PCI will be ready again. Otherwise, already in
> first iteration, we will bail out with pci_channel_offline() error.
>
> Thanks

Well yes. The problem is that this works for intermittent errors,
including when the card resets itself, which seems to be the use case in
mlx5_fw_reset_complete_reload() and mlx5_devlink_reload_fw_activate().
If there is a PCI error that requires a link reset, however, we see some
problems, although recovery does work after running into the timeout.

As I understand it, and as implemented at least on s390,
pci_channel_io_frozen is only set for fatal errors that require a reset,
while non-fatal errors will have pci_channel_io_normal (see also
Documentation/PCI/pcieaer-howto.rst). Thus I think pci_channel_offline()
should only be true if a reset is required or there is a permanent
error.

Furthermore, in the pci_channel_io_frozen state the PCI function may be
isolated and reads will not reach the endpoint; this is the case at
least on s390. Thus, for errors requiring a reset, the loop without the
pci_channel_offline() check will run until the reset is performed or the
timeout is reached. In the mlx5_health_try_recover() case during error
recovery we will then indeed always loop until timeout, because the loop
blocks mlx5_pci_err_detected() from returning, thus blocking the reset
(see Documentation/PCI/pci-error-recovery.rst). Adding Bjorn, maybe he
can confirm or correct my assumptions here.

Thanks,
Niklas

> >  		if (time_after(jiffies, end))
> > [...]
On Tue, Apr 04, 2023 at 05:27:35PM +0200, Niklas Schnelle wrote:
> On Mon, 2023-04-03 at 21:21 +0300, Leon Romanovsky wrote:
> > On Mon, Apr 03, 2023 at 09:56:56AM +0200, Niklas Schnelle wrote:
> > > after an error on the PCI link, the driver does not need to wait
> > > for the link to become functional again as a reset is required. Stop
> > > the wait loop in this case to accelerate the recovery flow.
> > >
> > > [...]
> >
> > According to the comment in sensor_pci_not_working(), this loop is
> > supposed to wait till PCI will be ready again. Otherwise, already in
> > first iteration, we will bail out with pci_channel_offline() error.
>
> Well yes. The problem is that this works for intermittent errors
> including when the card resets itself which seems to be the use case in
> mlx5_fw_reset_complete_reload() and mlx5_devlink_reload_fw_activate().
> If there is a PCI error that requires a link reset though we see some
> problems though it does work after running into the timeout.
>
> As I understand it and as implemented at least on s390,
> pci_channel_io_frozen is only set for fatal errors that require a reset
> while non fatal errors will have pci_channel_io_normal (see also
> Documentation/PCI/pcieaer-howto.rst)

Yes, I think that's true, see handle_error_source().

> thus I think pci_channel_offline()
> should only be true if a reset is required or there is a permanent
> error.

Yes, I think pci_channel_offline() will only be true when a fatal error
has been reported via AER or DPC (or a hotplug driver says the device
has been removed). The driver resetting the device should not cause
such a fatal error.

> Furthermore in the pci_channel_io_frozen state the PCI function
> may be isolated and the reads will not reach the endpoint, this is the
> case at least on s390. Thus for errors requiring a reset the loop
> without pci_channel_offline() will run until the reset is performed or
> the timeout is reached. In the mlx5_health_try_recover() case during
> error recovery we will then indeed always loop until timeout, because
> the loop blocks mlx5_pci_err_detected() from returning thus blocking
> the reset (see Documentation/PCI/pci-error-recovery.rst). Adding Bjorn,
> maybe he can confirm or correct my assumptions here.
>
> [...]
On Wed, Apr 05, 2023 at 04:06:13PM -0500, Bjorn Helgaas wrote:
> On Tue, Apr 04, 2023 at 05:27:35PM +0200, Niklas Schnelle wrote:
> > On Mon, 2023-04-03 at 21:21 +0300, Leon Romanovsky wrote:
> > > [...]
> >
> > [...]
> >
> > As I understand it and as implemented at least on s390,
> > pci_channel_io_frozen is only set for fatal errors that require a reset
> > while non fatal errors will have pci_channel_io_normal (see also
> > Documentation/PCI/pcieaer-howto.rst)
>
> Yes, I think that's true, see handle_error_source().
>
> > thus I think pci_channel_offline()
> > should only be true if a reset is required or there is a permanent
> > error.
>
> Yes, I think pci_channel_offline() will only be true when a fatal
> error has been reported via AER or DPC (or a hotplug driver says the
> device has been removed). The driver resetting the device should not
> cause such a fatal error.

Thank you for the explanation and confirmation.
On Mon, Apr 03, 2023 at 09:56:56AM +0200, Niklas Schnelle wrote:
> after an error on the PCI link, the driver does not need to wait
> for the link to become functional again as a reset is required. Stop
> the wait loop in this case to accelerate the recovery flow.
>
> Co-developed-by: Alexander Schmidt <alexs@linux.ibm.com>
> Signed-off-by: Alexander Schmidt <alexs@linux.ibm.com>
> Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
> ---
>  drivers/net/ethernet/mellanox/mlx5/core/health.c | 12 ++++++++++--
>  1 file changed, 10 insertions(+), 2 deletions(-)

The subject line should include the target for netdev patches:
[PATCH net-next] ....

Thanks,
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
On Sun, 2023-04-09 at 11:55 +0300, Leon Romanovsky wrote:
> On Mon, Apr 03, 2023 at 09:56:56AM +0200, Niklas Schnelle wrote:
> > after an error on the PCI link, the driver does not need to wait
> > for the link to become functional again as a reset is required. Stop
> > the wait loop in this case to accelerate the recovery flow.
> >
> > [...]
>
> The subject line should include the target for netdev patches:
> [PATCH net-next] ....
>
> Thanks,
> Reviewed-by: Leon Romanovsky <leonro@nvidia.com>

Thanks, I'll send a v2 with your R-b, the correct net-next prefix, and a
Link to this discussion.
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/health.c b/drivers/net/ethernet/mellanox/mlx5/core/health.c
index f9438d4e43ca..81ca44e0705a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/health.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/health.c
@@ -325,6 +325,8 @@ int mlx5_health_wait_pci_up(struct mlx5_core_dev *dev)
 	while (sensor_pci_not_working(dev)) {
 		if (time_after(jiffies, end))
 			return -ETIMEDOUT;
+		if (pci_channel_offline(dev->pdev))
+			return -EIO;
 		msleep(100);
 	}
 	return 0;
@@ -332,10 +334,16 @@ int mlx5_health_wait_pci_up(struct mlx5_core_dev *dev)
 
 static int mlx5_health_try_recover(struct mlx5_core_dev *dev)
 {
+	int rc;
+
 	mlx5_core_warn(dev, "handling bad device here\n");
 	mlx5_handle_bad_state(dev);
-	if (mlx5_health_wait_pci_up(dev)) {
-		mlx5_core_err(dev, "health recovery flow aborted, PCI reads still not working\n");
+	rc = mlx5_health_wait_pci_up(dev);
+	if (rc) {
+		if (rc == -ETIMEDOUT)
+			mlx5_core_err(dev, "health recovery flow aborted, PCI reads still not working\n");
+		else
+			mlx5_core_err(dev, "health recovery flow aborted, PCI channel offline\n");
 		return -EIO;
 	}
 	mlx5_core_err(dev, "starting health recovery flow\n");