From patchwork Sat Nov 4 03:55:13 2023
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 161528
From: Kefeng Wang
To: Andrew Morton
CC: Matthew Wilcox, David Hildenbrand, Kefeng Wang
Subject: [PATCH v2 01/10] mm: swap: introduce pfn_swap_entry_to_folio()
Date: Sat, 4 Nov 2023 11:55:13 +0800
Message-ID: <20231104035522.2418660-2-wangkefeng.wang@huawei.com>
In-Reply-To: <20231104035522.2418660-1-wangkefeng.wang@huawei.com>
References: <20231104035522.2418660-1-wangkefeng.wang@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Introduce a new pfn_swap_entry_to_folio(). It is similar to
pfn_swap_entry_to_page(), but returns a folio, which allows callers to
completely replace their struct page variables with struct folio
variables.
Signed-off-by: Kefeng Wang
---
 include/linux/swapops.h | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index bff1e8d97de0..85cb84e4be95 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -468,6 +468,19 @@ static inline struct page *pfn_swap_entry_to_page(swp_entry_t entry)
 	return p;
 }
 
+static inline struct folio *pfn_swap_entry_to_folio(swp_entry_t entry)
+{
+	struct folio *folio = pfn_folio(swp_offset_pfn(entry));
+
+	/*
+	 * Any use of migration entries may only occur while the
+	 * corresponding folio is locked
+	 */
+	BUG_ON(is_migration_entry(entry) && !folio_test_locked(folio));
+
+	return folio;
+}
+
 /*
  * A pfn swap entry is a special type of swap entry that always has a pfn stored
  * in the swap offset. They are used to represent unaddressable device memory