From patchwork Tue Aug 15 13:01:54 2023
X-Patchwork-Submitter: Tong Tiangen
X-Patchwork-Id: 135737
From: Tong Tiangen
To: Andrew Morton, Naoya Horiguchi, Miaohe Lin
Cc: Tong Tiangen, Guohanjun
Subject: [RFC PATCH -next] mm: fix softlockup by replacing tasklist_lock with RCU in for_each_process()
Date: Tue, 15 Aug 2023 21:01:54 +0800
Message-ID: <20230815130154.1100779-1-tongtiangen@huawei.com>
X-Mailer: git-send-email 2.25.1
We found a softlockup issue in our test. After analyzing the logs, we found
that the relevant CPU call traces are as follows:

CPU0:
  _do_fork
    -> copy_process()
      -> write_lock_irq(&tasklist_lock)  // Disables irqs, waits for tasklist_lock

CPU1:
  wp_page_copy()
    -> pte_offset_map_lock()
      -> spin_lock(&page->ptl)           // Holds page->ptl
    -> ptep_clear_flush()
      -> flush_tlb_others()
         ...
         -> smp_call_function_many()
           -> arch_send_call_function_ipi_mask()
             -> csd_lock_wait()          // Waits for other CPUs to respond to the IPI

CPU2:
  collect_procs_anon()
    -> read_lock(&tasklist_lock)         // Holds tasklist_lock
    -> for_each_process(tsk)
      -> page_mapped_in_vma()
        -> page_vma_mapped_walk()
          -> map_pte()
            -> spin_lock(&page->ptl)     // Waits for page->ptl

We can see that CPU1 is waiting for CPU0 to respond to the IPI (which CPU0
cannot do with interrupts disabled), CPU0 is waiting for CPU2 to unlock
tasklist_lock, and CPU2 is waiting for CPU1 to unlock page->ptl. As a
result, a softlockup is triggered.

collect_procs_anon() does not modify the tasklist; it only traverses it for
reading. Therefore, we can take the RCU read lock instead of taking
tasklist_lock for reading, which breaks the lock dependency chain above.
The same logic can also be applied to:
 - collect_procs_file()
 - collect_procs_fsdax()
 - collect_procs_ksm()
 - find_early_kill_thread()

Signed-off-by: Tong Tiangen
---
 mm/ksm.c            |  4 ++--
 mm/memory-failure.c | 36 ++++++++++++++++++++++--------------
 2 files changed, 24 insertions(+), 16 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 6b7b8928fb96..dcbc0c7f68e7 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2919,7 +2919,7 @@ void collect_procs_ksm(struct page *page, struct list_head *to_kill,
 		struct anon_vma *av = rmap_item->anon_vma;
 
 		anon_vma_lock_read(av);
-		read_lock(&tasklist_lock);
+		rcu_read_lock();
 		for_each_process(tsk) {
 			struct anon_vma_chain *vmac;
 			unsigned long addr;
@@ -2938,7 +2938,7 @@ void collect_procs_ksm(struct page *page, struct list_head *to_kill,
 				}
 			}
 		}
-		read_unlock(&tasklist_lock);
+		rcu_read_unlock();
 		anon_vma_unlock_read(av);
 	}
 }
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 7b01fffe7a79..6a02706043f4 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -546,24 +546,32 @@ static void kill_procs(struct list_head *to_kill, int forcekill, bool fail,
  * Find a dedicated thread which is supposed to handle SIGBUS(BUS_MCEERR_AO)
  * on behalf of the thread group. Return task_struct of the (first found)
  * dedicated thread if found, and return NULL otherwise.
- *
- * We already hold read_lock(&tasklist_lock) in the caller, so we don't
- * have to call rcu_read_lock/unlock() in this function.
  */
 static struct task_struct *find_early_kill_thread(struct task_struct *tsk)
 {
 	struct task_struct *t;
+	bool find = false;
 
+	rcu_read_lock();
 	for_each_thread(tsk, t) {
 		if (t->flags & PF_MCE_PROCESS) {
-			if (t->flags & PF_MCE_EARLY)
-				return t;
+			if (t->flags & PF_MCE_EARLY) {
+				find = true;
+				break;
+			}
 		} else {
-			if (sysctl_memory_failure_early_kill)
-				return t;
+			if (sysctl_memory_failure_early_kill) {
+				find = true;
+				break;
+			}
 		}
 	}
-	return NULL;
+	rcu_read_unlock();
+
+	if (!find)
+		t = NULL;
+
+	return t;
 }
 
 /*
@@ -609,7 +617,7 @@ static void collect_procs_anon(struct page *page, struct list_head *to_kill,
 		return;
 
 	pgoff = page_to_pgoff(page);
-	read_lock(&tasklist_lock);
+	rcu_read_lock();
 	for_each_process(tsk) {
 		struct anon_vma_chain *vmac;
 		struct task_struct *t = task_early_kill(tsk, force_early);
@@ -626,7 +634,7 @@ static void collect_procs_anon(struct page *page, struct list_head *to_kill,
 			add_to_kill_anon_file(t, page, vma, to_kill);
 		}
 	}
-	read_unlock(&tasklist_lock);
+	rcu_read_unlock();
 	anon_vma_unlock_read(av);
 }
 
@@ -642,7 +650,7 @@ static void collect_procs_file(struct page *page, struct list_head *to_kill,
 	pgoff_t pgoff;
 
 	i_mmap_lock_read(mapping);
-	read_lock(&tasklist_lock);
+	rcu_read_lock();
 	pgoff = page_to_pgoff(page);
 	for_each_process(tsk) {
 		struct task_struct *t = task_early_kill(tsk, force_early);
@@ -662,7 +670,7 @@ static void collect_procs_file(struct page *page, struct list_head *to_kill,
 			add_to_kill_anon_file(t, page, vma, to_kill);
 		}
 	}
-	read_unlock(&tasklist_lock);
+	rcu_read_unlock();
 	i_mmap_unlock_read(mapping);
 }
 
@@ -685,7 +693,7 @@ static void collect_procs_fsdax(struct page *page,
 	struct task_struct *tsk;
 
 	i_mmap_lock_read(mapping);
-	read_lock(&tasklist_lock);
+	rcu_read_lock();
 	for_each_process(tsk) {
 		struct task_struct *t = task_early_kill(tsk, true);
 
@@ -696,7 +704,7 @@ static void collect_procs_fsdax(struct page *page,
 			add_to_kill_fsdax(t, page, vma, to_kill, pgoff);
 		}
 	}
-	read_unlock(&tasklist_lock);
+	rcu_read_unlock();
 	i_mmap_unlock_read(mapping);
 }
 #endif /* CONFIG_FS_DAX */