From patchwork Fri Jun 9 07:50:49 2023
X-Patchwork-Submitter: Song Shuai
X-Patchwork-Id: 105375
From: Song Shuai
To: catalin.marinas@arm.com, will@kernel.org, paul.walmsley@sifive.com,
    palmer@dabbelt.com, aou@eecs.berkeley.edu, chris@zankel.net,
    jcmvbkbc@gmail.com, songshuaishuai@tinylab.org, steven.price@arm.com,
    vincenzo.frascino@arm.com, leyfoon.tan@starfivetech.com,
    mason.huo@starfivetech.com, jeeheng.sia@starfivetech.com,
    conor.dooley@microchip.com, ajones@ventanamicro.com
Cc: linux@armlinux.org.uk, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org
Subject: [PATCH V2 4/4] xtensa: hibernate: remove WARN_ON in save_processor_state
Date: Fri, 9 Jun 2023 15:50:49 +0800
Message-Id: <20230609075049.2651723-5-songshuaishuai@tinylab.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20230609075049.2651723-1-songshuaishuai@tinylab.org>
References: <20230609075049.2651723-1-songshuaishuai@tinylab.org>

During hibernation or restoration, freeze_secondary_cpus() checks
num_online_cpus() with a BUG_ON(), and the subsequent
save_processor_state() repeats the same check with a WARN_ON().
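For context, a simplified sketch of the freeze_secondary_cpus() path in
kernel/cpu.c; everything except the final assertion is elided, so treat
this as illustrative rather than the exact upstream code:

	/*
	 * Simplified sketch of freeze_secondary_cpus() (kernel/cpu.c).
	 * Illustrative only: the CPU-offlining loop is elided.
	 */
	int freeze_secondary_cpus(int primary)
	{
		int cpu, error = 0;

		/* ... take every online CPU except @primary offline ... */

		if (!error)
			/* only the primary CPU may still be online here */
			BUG_ON(num_online_cpus() > 1);

		return error;
	}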
When CONFIG_PM_SLEEP_SMP=n, freeze_secondary_cpus() is not defined, but
the only way to disable CONFIG_PM_SLEEP_SMP is !SMP, where
num_online_cpus() is always 1, so there is no need to check it in
save_processor_state() either.

Remove the unnecessary check from save_processor_state().

Signed-off-by: Song Shuai
---
 arch/xtensa/kernel/hibernate.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arch/xtensa/kernel/hibernate.c b/arch/xtensa/kernel/hibernate.c
index 06984327d6e2..314602bbf431 100644
--- a/arch/xtensa/kernel/hibernate.c
+++ b/arch/xtensa/kernel/hibernate.c
@@ -14,7 +14,6 @@ int pfn_is_nosave(unsigned long pfn)
 
 void notrace save_processor_state(void)
 {
-	WARN_ON(num_online_cpus() != 1);
 #if XTENSA_HAVE_COPROCESSORS
 	local_coprocessors_flush_release_all();
 #endif
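For reference, a minimal sketch of why the removed assertion is trivially
true on !SMP builds (paraphrased from the NR_CPUS == 1 case in
include/linux/cpumask.h; not the exact upstream definition):

	/* On UP (!SMP) kernels the online CPU count is a constant. */
	#define num_online_cpus()	1U

	/*
	 * So on a !SMP kernel, which is the only way to get
	 * CONFIG_PM_SLEEP_SMP=n, the removed check could never fire:
	 *
	 *	WARN_ON(num_online_cpus() != 1)  ->  WARN_ON(1U != 1)
	 */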