From patchwork Mon Oct 24 11:32:50 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 10044
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Michal Jaron,
    Mateusz Palczewski, Konrad Jankowski, Tony Nguyen, Sasha Levin
Subject: [PATCH 5.15 426/530] iavf: Fix race between iavf_close and iavf_reset_task
Date: Mon, 24 Oct 2022 13:32:50 +0200
Message-Id: <20221024113104.379678664@linuxfoundation.org>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20221024113044.976326639@linuxfoundation.org>
References: <20221024113044.976326639@linuxfoundation.org>
User-Agent: quilt/0.67

From: Michal Jaron

[ Upstream commit 11c12adcbc1598d91e73ab6ddfa41d25a01478ed ]

During stress tests that add a VF to a namespace and change the VF's
trust setting, there was a race between iavf_reset_task and iavf_close.
Sometimes, when IAVF_FLAG_AQ_DISABLE_QUEUES from iavf_close was sent to
the PF after a reset and before IAVF_AQ_GET_CONFIG was sent, the PF
returned the IAVF_NOT_SUPPORTED error for the disable queues request
and for all following requests.

The get_config request must be sent before any other aq_required
request, but iavf_close clears all flags; if get_config was not sent
before iavf_close, it will not be sent at all. When
IAVF_FLAG_AQ_GET_OFFLOAD_VLAN_V2_CAPS was sent before
IAVF_FLAG_AQ_DISABLE_QUEUES, there was an rtnl_lock deadlock between
iavf_close and iavf_adminq_task until iavf_close timed out, and the
disable queues request was only sent after iavf_close had finished.

There was also a problem with sending delete/add filters: when filters
had not yet been added to the PF and iavf_close marked all filters for
removal, the driver could try to remove nonexistent filters on the PF.

Add aq_to_restore to save the aq_required flags and send them after
disable queues has been handled. Clear all flags passed to iavf_down
except IAVF_FLAG_AQ_GET_CONFIG, as this flag is necessary before any
other aq_required request can be sent. Drop the flags that we do not
want to send at all, since we are in iavf_close and want to disable the
interface. Remove filters that were not yet sent to the PF, and set the
delete-filter flags only when there are filters to remove.

Signed-off-by: Michal Jaron
Signed-off-by: Mateusz Palczewski
Tested-by: Konrad Jankowski
Signed-off-by: Tony Nguyen
Signed-off-by: Sasha Levin
---
 drivers/net/ethernet/intel/iavf/iavf_main.c | 177 ++++++++++++++++----
 1 file changed, 141 insertions(+), 36 deletions(-)

diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
index 00b2ef01f4ea..629ebdfa48b8 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
@@ -1012,66 +1012,138 @@ static void iavf_up_complete(struct iavf_adapter *adapter)
 }
 
 /**
- * iavf_down - Shutdown the connection processing
+ * iavf_clear_mac_vlan_filters - Remove mac and vlan filters not sent to PF
+ * yet and mark other to be removed.
  * @adapter: board private structure
- *
- * Expects to be called while holding the __IAVF_IN_CRITICAL_TASK bit lock.
  **/
-void iavf_down(struct iavf_adapter *adapter)
+static void iavf_clear_mac_vlan_filters(struct iavf_adapter *adapter)
 {
-        struct net_device *netdev = adapter->netdev;
-        struct iavf_vlan_filter *vlf;
-        struct iavf_cloud_filter *cf;
-        struct iavf_fdir_fltr *fdir;
-        struct iavf_mac_filter *f;
-        struct iavf_adv_rss *rss;
-
-        if (adapter->state <= __IAVF_DOWN_PENDING)
-                return;
-
-        netif_carrier_off(netdev);
-        netif_tx_disable(netdev);
-        adapter->link_up = false;
-        iavf_napi_disable_all(adapter);
-        iavf_irq_disable(adapter);
+        struct iavf_vlan_filter *vlf, *vlftmp;
+        struct iavf_mac_filter *f, *ftmp;
 
         spin_lock_bh(&adapter->mac_vlan_list_lock);
-
         /* clear the sync flag on all filters */
         __dev_uc_unsync(adapter->netdev, NULL);
         __dev_mc_unsync(adapter->netdev, NULL);
 
         /* remove all MAC filters */
-        list_for_each_entry(f, &adapter->mac_filter_list, list) {
-                f->remove = true;
+        list_for_each_entry_safe(f, ftmp, &adapter->mac_filter_list,
+                                 list) {
+                if (f->add) {
+                        list_del(&f->list);
+                        kfree(f);
+                } else {
+                        f->remove = true;
+                }
         }
 
         /* remove all VLAN filters */
-        list_for_each_entry(vlf, &adapter->vlan_filter_list, list) {
-                vlf->remove = true;
+        list_for_each_entry_safe(vlf, vlftmp, &adapter->vlan_filter_list,
+                                 list) {
+                if (vlf->add) {
+                        list_del(&vlf->list);
+                        kfree(vlf);
+                } else {
+                        vlf->remove = true;
+                }
         }
-
         spin_unlock_bh(&adapter->mac_vlan_list_lock);
+}
+
+/**
+ * iavf_clear_cloud_filters - Remove cloud filters not sent to PF yet and
+ * mark other to be removed.
+ * @adapter: board private structure
+ **/
+static void iavf_clear_cloud_filters(struct iavf_adapter *adapter)
+{
+        struct iavf_cloud_filter *cf, *cftmp;
 
         /* remove all cloud filters */
         spin_lock_bh(&adapter->cloud_filter_list_lock);
-        list_for_each_entry(cf, &adapter->cloud_filter_list, list) {
-                cf->del = true;
+        list_for_each_entry_safe(cf, cftmp, &adapter->cloud_filter_list,
+                                 list) {
+                if (cf->add) {
+                        list_del(&cf->list);
+                        kfree(cf);
+                        adapter->num_cloud_filters--;
+                } else {
+                        cf->del = true;
+                }
         }
         spin_unlock_bh(&adapter->cloud_filter_list_lock);
+}
+
+/**
+ * iavf_clear_fdir_filters - Remove fdir filters not sent to PF yet and mark
+ * other to be removed.
+ * @adapter: board private structure
+ **/
+static void iavf_clear_fdir_filters(struct iavf_adapter *adapter)
+{
+        struct iavf_fdir_fltr *fdir, *fdirtmp;
 
         /* remove all Flow Director filters */
         spin_lock_bh(&adapter->fdir_fltr_lock);
-        list_for_each_entry(fdir, &adapter->fdir_list_head, list) {
-                fdir->state = IAVF_FDIR_FLTR_DEL_REQUEST;
+        list_for_each_entry_safe(fdir, fdirtmp, &adapter->fdir_list_head,
+                                 list) {
+                if (fdir->state == IAVF_FDIR_FLTR_ADD_REQUEST) {
+                        list_del(&fdir->list);
+                        kfree(fdir);
+                        adapter->fdir_active_fltr--;
+                } else {
+                        fdir->state = IAVF_FDIR_FLTR_DEL_REQUEST;
+                }
         }
         spin_unlock_bh(&adapter->fdir_fltr_lock);
+}
+
+/**
+ * iavf_clear_adv_rss_conf - Remove adv rss conf not sent to PF yet and mark
+ * other to be removed.
+ * @adapter: board private structure
+ **/
+static void iavf_clear_adv_rss_conf(struct iavf_adapter *adapter)
+{
+        struct iavf_adv_rss *rss, *rsstmp;
 
         /* remove all advance RSS configuration */
         spin_lock_bh(&adapter->adv_rss_lock);
-        list_for_each_entry(rss, &adapter->adv_rss_list_head, list)
-                rss->state = IAVF_ADV_RSS_DEL_REQUEST;
+        list_for_each_entry_safe(rss, rsstmp, &adapter->adv_rss_list_head,
+                                 list) {
+                if (rss->state == IAVF_ADV_RSS_ADD_REQUEST) {
+                        list_del(&rss->list);
+                        kfree(rss);
+                } else {
+                        rss->state = IAVF_ADV_RSS_DEL_REQUEST;
+                }
+        }
         spin_unlock_bh(&adapter->adv_rss_lock);
+}
+
+/**
+ * iavf_down - Shutdown the connection processing
+ * @adapter: board private structure
+ *
+ * Expects to be called while holding the __IAVF_IN_CRITICAL_TASK bit lock.
+ **/
+void iavf_down(struct iavf_adapter *adapter)
+{
+        struct net_device *netdev = adapter->netdev;
+
+        if (adapter->state <= __IAVF_DOWN_PENDING)
+                return;
+
+        netif_carrier_off(netdev);
+        netif_tx_disable(netdev);
+        adapter->link_up = false;
+        iavf_napi_disable_all(adapter);
+        iavf_irq_disable(adapter);
+
+        iavf_clear_mac_vlan_filters(adapter);
+        iavf_clear_cloud_filters(adapter);
+        iavf_clear_fdir_filters(adapter);
+        iavf_clear_adv_rss_conf(adapter);
 
         if (!(adapter->flags & IAVF_FLAG_PF_COMMS_FAILED)) {
                 /* cancel any current operation */
@@ -1080,11 +1152,16 @@ void iavf_down(struct iavf_adapter *adapter)
                  * here for this to complete. The watchdog is still running
                  * and it will take care of this.
                  */
-                adapter->aq_required = IAVF_FLAG_AQ_DEL_MAC_FILTER;
-                adapter->aq_required |= IAVF_FLAG_AQ_DEL_VLAN_FILTER;
-                adapter->aq_required |= IAVF_FLAG_AQ_DEL_CLOUD_FILTER;
-                adapter->aq_required |= IAVF_FLAG_AQ_DEL_FDIR_FILTER;
-                adapter->aq_required |= IAVF_FLAG_AQ_DEL_ADV_RSS_CFG;
+                if (!list_empty(&adapter->mac_filter_list))
+                        adapter->aq_required |= IAVF_FLAG_AQ_DEL_MAC_FILTER;
+                if (!list_empty(&adapter->vlan_filter_list))
+                        adapter->aq_required |= IAVF_FLAG_AQ_DEL_VLAN_FILTER;
+                if (!list_empty(&adapter->cloud_filter_list))
+                        adapter->aq_required |= IAVF_FLAG_AQ_DEL_CLOUD_FILTER;
+                if (!list_empty(&adapter->fdir_list_head))
+                        adapter->aq_required |= IAVF_FLAG_AQ_DEL_FDIR_FILTER;
+                if (!list_empty(&adapter->adv_rss_list_head))
+                        adapter->aq_required |= IAVF_FLAG_AQ_DEL_ADV_RSS_CFG;
                 adapter->aq_required |= IAVF_FLAG_AQ_DISABLE_QUEUES;
         }
 
@@ -3483,6 +3560,7 @@ static int iavf_open(struct net_device *netdev)
 static int iavf_close(struct net_device *netdev)
 {
         struct iavf_adapter *adapter = netdev_priv(netdev);
+        u64 aq_to_restore;
         int status;
 
         mutex_lock(&adapter->crit_lock);
@@ -3495,6 +3573,29 @@ static int iavf_close(struct net_device *netdev)
         set_bit(__IAVF_VSI_DOWN, adapter->vsi.state);
         if (CLIENT_ENABLED(adapter))
                 adapter->flags |= IAVF_FLAG_CLIENT_NEEDS_CLOSE;
+        /* We cannot send IAVF_FLAG_AQ_GET_OFFLOAD_VLAN_V2_CAPS before
+         * IAVF_FLAG_AQ_DISABLE_QUEUES because in such case there is rtnl
+         * deadlock with adminq_task() until iavf_close timeouts. We must send
+         * IAVF_FLAG_AQ_GET_CONFIG before IAVF_FLAG_AQ_DISABLE_QUEUES to make
+         * disable queues possible for vf. Give only necessary flags to
+         * iavf_down and save other to set them right before iavf_close()
+         * returns, when IAVF_FLAG_AQ_DISABLE_QUEUES will be already sent and
+         * iavf will be in DOWN state.
+         */
+        aq_to_restore = adapter->aq_required;
+        adapter->aq_required &= IAVF_FLAG_AQ_GET_CONFIG;
+
+        /* Remove flags which we do not want to send after close or we want to
+         * send before disable queues.
+         */
+        aq_to_restore &= ~(IAVF_FLAG_AQ_GET_CONFIG |
+                           IAVF_FLAG_AQ_ENABLE_QUEUES |
+                           IAVF_FLAG_AQ_CONFIGURE_QUEUES |
+                           IAVF_FLAG_AQ_ADD_VLAN_FILTER |
+                           IAVF_FLAG_AQ_ADD_MAC_FILTER |
+                           IAVF_FLAG_AQ_ADD_CLOUD_FILTER |
+                           IAVF_FLAG_AQ_ADD_FDIR_FILTER |
+                           IAVF_FLAG_AQ_ADD_ADV_RSS_CFG);
 
         iavf_down(adapter);
         iavf_change_state(adapter, __IAVF_DOWN_PENDING);
@@ -3518,6 +3619,10 @@ static int iavf_close(struct net_device *netdev)
                                     msecs_to_jiffies(500));
         if (!status)
                 netdev_warn(netdev, "Device resources not yet released\n");
+
+        mutex_lock(&adapter->crit_lock);
+        adapter->aq_required |= aq_to_restore;
+        mutex_unlock(&adapter->crit_lock);
         return 0;
 }
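
For readers following the logic rather than the diff context, below is a
minimal standalone sketch of the ordering the patch enforces in iavf_close():
save the pending aq_required work, let iavf_down() run with only the flags
needed for DISABLE_QUEUES to be accepted, and restore the rest once the
adapter is down. This is plain userspace C with made-up flag names and bit
values for illustration only; it is not the driver code and not the real
IAVF_FLAG_* definitions.

/* Standalone sketch, NOT driver code: models the aq_required
 * save/mask/restore ordering added to iavf_close() by this patch.
 * Flag names and bit values are invented for this example. */
#include <stdint.h>
#include <stdio.h>

#define AQ_GET_CONFIG               (1u << 0)
#define AQ_DISABLE_QUEUES           (1u << 1)
#define AQ_ADD_MAC_FILTER           (1u << 2)
#define AQ_GET_OFFLOAD_VLAN_V2_CAPS (1u << 3)

int main(void)
{
	/* Requests still pending at the moment close() is entered. */
	uint32_t aq_required = AQ_GET_CONFIG | AQ_ADD_MAC_FILTER |
			       AQ_GET_OFFLOAD_VLAN_V2_CAPS;

	/* Save everything; keep only GET_CONFIG for the down path so the
	 * PF accepts the later DISABLE_QUEUES request. */
	uint32_t aq_to_restore = aq_required;
	aq_required &= AQ_GET_CONFIG;

	/* Drop requests that must not be replayed after close
	 * (config refresh, filter adds). */
	aq_to_restore &= ~(AQ_GET_CONFIG | AQ_ADD_MAC_FILTER);

	/* The down path queues the disable request... */
	aq_required |= AQ_DISABLE_QUEUES;
	/* ...queues get disabled and the adapter reaches the DOWN state. */
	aq_required &= ~(AQ_GET_CONFIG | AQ_DISABLE_QUEUES);

	/* Only now is the saved work restored (here: the VLAN v2 caps
	 * request, which would have deadlocked if sent before
	 * DISABLE_QUEUES). */
	aq_required |= aq_to_restore;

	printf("aq_required after close: 0x%x\n", aq_required);
	return 0;
}

Compiled with any C99 compiler, this prints 0x8: only the VLAN v2 caps
request survives to be re-queued after close, mirroring the aq_to_restore
handling at the end of iavf_close().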