From patchwork Wed Jul 19 11:29:59 2023
X-Patchwork-Submitter: Kemeng Shi
X-Patchwork-Id: 122368
From: Kemeng Shi <shikemeng@huaweicloud.com>
To: akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: shikemeng@huaweicloud.com
Subject: [PATCH 2/4] mm/compaction: use "spinlock_t *" to record held lock in isolate_migratepages_block
Date: Wed, 19 Jul 2023 19:29:59 +0800
Message-Id: <20230719113001.2023703-3-shikemeng@huaweicloud.com>
X-Mailer: git-send-email 2.30.0
In-Reply-To: <20230719113001.2023703-1-shikemeng@huaweicloud.com>
References: <20230719113001.2023703-1-shikemeng@huaweicloud.com>
MIME-Version: 1.0
Use "spinlock_t *" instead of "struct lruvec *" to record the held lock in
isolate_migratepages_block. This is a preparation for using
compact_unlock_should_abort in isolate_migratepages_block to remove
repeated code.
Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
---
 mm/compaction.c | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index dfef14d3ef78..638146a49e89 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -840,7 +840,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 	unsigned long nr_scanned = 0, nr_isolated = 0;
 	struct lruvec *lruvec;
 	unsigned long flags = 0;
-	struct lruvec *locked = NULL;
+	spinlock_t *locked = NULL;
 	struct folio *folio = NULL;
 	struct page *page = NULL, *valid_page = NULL;
 	struct address_space *mapping;
@@ -911,7 +911,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		 */
 		if (!(low_pfn % COMPACT_CLUSTER_MAX)) {
 			if (locked) {
-				unlock_page_lruvec_irqrestore(locked, flags);
+				spin_unlock_irqrestore(locked, flags);
 				locked = NULL;
 			}
@@ -946,7 +946,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		if (PageHuge(page) && cc->alloc_contig) {
 			if (locked) {
-				unlock_page_lruvec_irqrestore(locked, flags);
+				spin_unlock_irqrestore(locked, flags);
 				locked = NULL;
 			}
@@ -1035,7 +1035,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			if (unlikely(__PageMovable(page)) &&
 					!PageIsolated(page)) {
 				if (locked) {
-					unlock_page_lruvec_irqrestore(locked, flags);
+					spin_unlock_irqrestore(locked, flags);
 					locked = NULL;
 				}
@@ -1120,12 +1120,11 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		lruvec = folio_lruvec(folio);

 		/* If we already hold the lock, we can skip some rechecking */
-		if (lruvec != locked) {
+		if (&lruvec->lru_lock != locked) {
 			if (locked)
-				unlock_page_lruvec_irqrestore(locked, flags);
+				spin_unlock_irqrestore(locked, flags);

-			compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
-			locked = lruvec;
+			locked = compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);

 			lruvec_memcg_debug(lruvec, folio);
@@ -1188,7 +1187,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 isolate_fail_put:
 		/* Avoid potential deadlock in freeing page under lru_lock */
 		if (locked) {
-			unlock_page_lruvec_irqrestore(locked, flags);
+			spin_unlock_irqrestore(locked, flags);
 			locked = NULL;
 		}
 		folio_put(folio);
@@ -1204,7 +1203,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 	 */
 	if (nr_isolated) {
 		if (locked) {
-			unlock_page_lruvec_irqrestore(locked, flags);
+			spin_unlock_irqrestore(locked, flags);
 			locked = NULL;
 		}
 		putback_movable_pages(&cc->migratepages);
@@ -1236,7 +1235,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
isolate_abort:
 	if (locked)
-		unlock_page_lruvec_irqrestore(locked, flags);
+		spin_unlock_irqrestore(locked, flags);
 	if (folio) {
 		folio_set_lru(folio);
 		folio_put(folio);