From patchwork Mon May 22 04:51:00 2023
X-Patchwork-Submitter: Hugh Dickins
X-Patchwork-Id: 97014
Date: Sun, 21 May 2023 21:51:00 -0700 (PDT)
From: Hugh Dickins
To: Andrew Morton
Shutemov" , Matthew Wilcox , David Hildenbrand , Suren Baghdasaryan , Qi Zheng , Yang Shi , Mel Gorman , Peter Xu , Peter Zijlstra , Will Deacon , Yu Zhao , Alistair Popple , Ralph Campbell , Ira Weiny , Steven Price , SeongJae Park , Naoya Horiguchi , Christophe Leroy , Zack Rusin , Jason Gunthorpe , Axel Rasmussen , Anshuman Khandual , Pasha Tatashin , Miaohe Lin , Minchan Kim , Christoph Hellwig , Song Liu , Thomas Hellstrom , linux-kernel@vger.kernel.org, linux-mm@kvack.org Subject: [PATCH 02/31] mm/migrate: remove cruft from migration_entry_wait()s In-Reply-To: <68a97fbe-5c1e-7ac6-72c-7b9c6290b370@google.com> Message-ID: <1659568-468a-6d36-c26-6a52a335ab59@google.com> References: <68a97fbe-5c1e-7ac6-72c-7b9c6290b370@google.com> MIME-Version: 1.0 X-Spam-Status: No, score=-17.6 required=5.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF, ENV_AND_HDR_SPF_MATCH,RCVD_IN_DNSWL_NONE,SPF_HELO_NONE,SPF_PASS, T_SCC_BODY_TEXT_LINE,USER_IN_DEF_DKIM_WL,USER_IN_DEF_SPF_WL autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1766569705053461775?= X-GMAIL-MSGID: =?utf-8?q?1766569705053461775?= migration_entry_wait_on_locked() does not need to take a mapped pte pointer, its callers can do the unmap first. Annotate it with __releases(ptl) to reduce sparse warnings. Fold __migration_entry_wait_huge() into migration_entry_wait_huge(). Fold __migration_entry_wait() into migration_entry_wait(), preferring the tighter pte_offset_map_lock() to pte_offset_map() and pte_lockptr(). Signed-off-by: Hugh Dickins Reviewed-by: Alistair Popple --- include/linux/migrate.h | 4 ++-- include/linux/swapops.h | 17 +++-------------- mm/filemap.c | 13 ++++--------- mm/migrate.c | 37 +++++++++++++------------------------ 4 files changed, 22 insertions(+), 49 deletions(-) diff --git a/include/linux/migrate.h b/include/linux/migrate.h index 6241a1596a75..affea3063473 100644 --- a/include/linux/migrate.h +++ b/include/linux/migrate.h @@ -75,8 +75,8 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode); int migrate_huge_page_move_mapping(struct address_space *mapping, struct folio *dst, struct folio *src); -void migration_entry_wait_on_locked(swp_entry_t entry, pte_t *ptep, - spinlock_t *ptl); +void migration_entry_wait_on_locked(swp_entry_t entry, spinlock_t *ptl) + __releases(ptl); void folio_migrate_flags(struct folio *newfolio, struct folio *folio); void folio_migrate_copy(struct folio *newfolio, struct folio *folio); int folio_migrate_mapping(struct address_space *mapping, diff --git a/include/linux/swapops.h b/include/linux/swapops.h index 3a451b7afcb3..4c932cb45e0b 100644 --- a/include/linux/swapops.h +++ b/include/linux/swapops.h @@ -332,15 +332,9 @@ static inline bool is_migration_entry_dirty(swp_entry_t entry) return false; } -extern void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep, - spinlock_t *ptl); extern void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd, unsigned long address); -#ifdef CONFIG_HUGETLB_PAGE -extern void __migration_entry_wait_huge(struct vm_area_struct *vma, - pte_t *ptep, spinlock_t *ptl); extern void migration_entry_wait_huge(struct vm_area_struct *vma, pte_t *pte); -#endif /* CONFIG_HUGETLB_PAGE */ #else /* CONFIG_MIGRATION */ static inline swp_entry_t make_readable_migration_entry(pgoff_t offset) { @@ 
@@ -362,15 +356,10 @@ static inline int is_migration_entry(swp_entry_t swp)
 	return 0;
 }
 
-static inline void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
-					spinlock_t *ptl) { }
 static inline void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
-					unsigned long address) { }
-#ifdef CONFIG_HUGETLB_PAGE
-static inline void __migration_entry_wait_huge(struct vm_area_struct *vma,
-		pte_t *ptep, spinlock_t *ptl) { }
-static inline void migration_entry_wait_huge(struct vm_area_struct *vma, pte_t *pte) { }
-#endif	/* CONFIG_HUGETLB_PAGE */
+					unsigned long address) { }
+static inline void migration_entry_wait_huge(struct vm_area_struct *vma,
+					pte_t *pte) { }
 static inline int is_writable_migration_entry(swp_entry_t entry)
 {
 	return 0;
diff --git a/mm/filemap.c b/mm/filemap.c
index b4c9bd368b7e..28b42ee848a4 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1359,8 +1359,6 @@ static inline int folio_wait_bit_common(struct folio *folio, int bit_nr,
 /**
  * migration_entry_wait_on_locked - Wait for a migration entry to be removed
  * @entry: migration swap entry.
- * @ptep: mapped pte pointer. Will return with the ptep unmapped. Only required
- *	for pte entries, pass NULL for pmd entries.
  * @ptl: already locked ptl. This function will drop the lock.
  *
  * Wait for a migration entry referencing the given page to be removed. This is
@@ -1369,13 +1367,13 @@ static inline int folio_wait_bit_common(struct folio *folio, int bit_nr,
  * should be called while holding the ptl for the migration entry referencing
  * the page.
  *
- * Returns after unmapping and unlocking the pte/ptl with pte_unmap_unlock().
+ * Returns after unlocking the ptl.
  *
  * This follows the same logic as folio_wait_bit_common() so see the comments
  * there.
  */
-void migration_entry_wait_on_locked(swp_entry_t entry, pte_t *ptep,
-					spinlock_t *ptl)
+void migration_entry_wait_on_locked(swp_entry_t entry, spinlock_t *ptl)
+	__releases(ptl)
 {
 	struct wait_page_queue wait_page;
 	wait_queue_entry_t *wait = &wait_page.wait;
@@ -1409,10 +1407,7 @@ void migration_entry_wait_on_locked(swp_entry_t entry, pte_t *ptep,
 	 * a valid reference to the page, and it must take the ptl to remove the
 	 * migration entry. So the page is valid until the ptl is dropped.
 	 */
-	if (ptep)
-		pte_unmap_unlock(ptep, ptl);
-	else
-		spin_unlock(ptl);
+	spin_unlock(ptl);
 
 	for (;;) {
 		unsigned int flags;
diff --git a/mm/migrate.c b/mm/migrate.c
index 01cac26a3127..3ecb7a40075f 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -296,14 +296,18 @@ void remove_migration_ptes(struct folio *src, struct folio *dst, bool locked)
  * get to the page and wait until migration is finished.
  * When we return from this function the fault will be retried.
  */
-void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
-				spinlock_t *ptl)
+void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
+			  unsigned long address)
 {
+	spinlock_t *ptl;
+	pte_t *ptep;
 	pte_t pte;
 	swp_entry_t entry;
 
-	spin_lock(ptl);
+	ptep = pte_offset_map_lock(mm, pmd, address, &ptl);
 	pte = *ptep;
+	pte_unmap(ptep);
+
 	if (!is_swap_pte(pte))
 		goto out;
 
@@ -311,18 +315,10 @@ void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
 	if (!is_migration_entry(entry))
 		goto out;
 
-	migration_entry_wait_on_locked(entry, ptep, ptl);
+	migration_entry_wait_on_locked(entry, ptl);
 	return;
 out:
-	pte_unmap_unlock(ptep, ptl);
-}
-
-void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
-				unsigned long address)
-{
-	spinlock_t *ptl = pte_lockptr(mm, pmd);
-	pte_t *ptep = pte_offset_map(pmd, address);
-	__migration_entry_wait(mm, ptep, ptl);
+	spin_unlock(ptl);
 }
 
 #ifdef CONFIG_HUGETLB_PAGE
@@ -332,9 +328,9 @@ void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
  *
  * This function will release the vma lock before returning.
  */
-void __migration_entry_wait_huge(struct vm_area_struct *vma,
-				pte_t *ptep, spinlock_t *ptl)
+void migration_entry_wait_huge(struct vm_area_struct *vma, pte_t *ptep)
 {
+	spinlock_t *ptl = huge_pte_lockptr(hstate_vma(vma), vma->vm_mm, ptep);
 	pte_t pte;
 
 	hugetlb_vma_assert_locked(vma);
@@ -352,16 +348,9 @@ void __migration_entry_wait_huge(struct vm_area_struct *vma,
 		 * lock release in migration_entry_wait_on_locked().
 		 */
 		hugetlb_vma_unlock_read(vma);
-		migration_entry_wait_on_locked(pte_to_swp_entry(pte), NULL, ptl);
+		migration_entry_wait_on_locked(pte_to_swp_entry(pte), ptl);
 	}
 }
-
-void migration_entry_wait_huge(struct vm_area_struct *vma, pte_t *pte)
-{
-	spinlock_t *ptl = huge_pte_lockptr(hstate_vma(vma), vma->vm_mm, pte);
-
-	__migration_entry_wait_huge(vma, pte, ptl);
-}
 #endif
 
 #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
@@ -372,7 +361,7 @@ void pmd_migration_entry_wait(struct mm_struct *mm, pmd_t *pmd)
 	ptl = pmd_lock(mm, pmd);
 	if (!is_pmd_migration_entry(*pmd))
 		goto unlock;
-	migration_entry_wait_on_locked(pmd_to_swp_entry(*pmd), NULL, ptl);
+	migration_entry_wait_on_locked(pmd_to_swp_entry(*pmd), ptl);
 	return;
 unlock:
 	spin_unlock(ptl);
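
To see the combined effect of the mm/migrate.c hunks, here is how
migration_entry_wait() reads with the patch applied. It is reassembled
from the hunks above plus the one unchanged pte_to_swp_entry() line that
falls in the gap between them; the explanatory comments are an editor's
addition, not part of the patch.

void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
			  unsigned long address)
{
	spinlock_t *ptl;
	pte_t *ptep;
	pte_t pte;
	swp_entry_t entry;

	/* Map the pte and take its lock in one step, replacing the old
	 * pte_lockptr() + pte_offset_map() + spin_lock() sequence. */
	ptep = pte_offset_map_lock(mm, pmd, address, &ptl);
	pte = *ptep;		/* snapshot the entry while locked */
	pte_unmap(ptep);	/* unmap early: only the ptl is needed below */

	if (!is_swap_pte(pte))
		goto out;

	entry = pte_to_swp_entry(pte);
	if (!is_migration_entry(entry))
		goto out;

	/* Sleeps until the migration entry is removed, and drops the
	 * ptl on every path, matching its __releases(ptl) annotation. */
	migration_entry_wait_on_locked(entry, ptl);
	return;
out:
	spin_unlock(ptl);
}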
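
On the __releases(ptl) annotation itself: under sparse ("make C=1"), such
annotations tell the checker that a function exits with the named lock
dropped, so acquire/release imbalances can be flagged at build time. Below
is a minimal standalone sketch of the mechanism, with the macro definitions
mirroring the kernel's compiler headers and a hypothetical file name; it is
illustrative only, not part of the patch.

/* sparse_releases_demo.c - check with: sparse sparse_releases_demo.c */
#ifdef __CHECKER__
# define __acquires(x)	__attribute__((context(x,0,1)))
# define __releases(x)	__attribute__((context(x,1,0)))
#else
# define __acquires(x)
# define __releases(x)
#endif

typedef struct { volatile int locked; } spinlock_t;

/* Sparse is told this function enters without the lock, exits with it. */
static void demo_lock(spinlock_t *ptl) __acquires(ptl)
{
	ptl->locked = 1;	/* stand-in for the real lock operation */
}

/* Caller must hold ptl on entry; sparse is told we exit with it dropped. */
static void demo_unlock(spinlock_t *ptl) __releases(ptl)
{
	ptl->locked = 0;	/* stand-in for the real unlock */
}

int main(void)
{
	spinlock_t ptl = { 0 };

	demo_lock(&ptl);	/* context +1 */
	demo_unlock(&ptl);	/* context -1: balanced, no warning */
	return 0;
}

Deleting the demo_unlock() call and re-running sparse produces a "context
imbalance" warning; that is the class of warning the annotation added in
this patch is meant to keep quiet.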