From patchwork Mon Feb  6 06:33:11 2023
X-Patchwork-Submitter: "Huang, Ying"
X-Patchwork-Id: 53019
From: Huang Ying
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying, Zi Yan,
    Yang Shi, Baolin Wang, Oscar Salvador, Matthew Wilcox, Bharata B Rao,
    Alistair Popple, haoxin, Minchan Kim, Mike Kravetz,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>
Subject: [PATCH -v4 7/9] migrate_pages: share more code between _unmap and _move
Date: Mon, 6 Feb 2023 14:33:11 +0800
Message-Id: <20230206063313.635011-8-ying.huang@intel.com>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20230206063313.635011-1-ying.huang@intel.com>
References: <20230206063313.635011-1-ying.huang@intel.com>
MIME-Version: 1.0
This is a code cleanup patch to reduce the duplicated code between the
_unmap and _move stages of migrate_pages().  No functionality change is
expected.

Signed-off-by: "Huang, Ying"
Cc: Zi Yan
Cc: Yang Shi
Cc: Baolin Wang
Cc: Oscar Salvador
Cc: Matthew Wilcox
Cc: Bharata B Rao
Cc: Alistair Popple
Cc: haoxin
Cc: Minchan Kim
Cc: Mike Kravetz
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Reviewed-by: Zi Yan
---
 mm/migrate.c | 203 ++++++++++++++++++++-------------------------------
 1 file changed, 81 insertions(+), 122 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 23eb01cfae4c..9378fa2ad4a5 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1037,6 +1037,7 @@ static void __migrate_folio_extract(struct folio *dst,
 static void migrate_folio_undo_src(struct folio *src,
 				   int page_was_mapped,
 				   struct anon_vma *anon_vma,
+				   bool locked,
 				   struct list_head *ret)
 {
 	if (page_was_mapped)
@@ -1044,16 +1045,20 @@ static void migrate_folio_undo_src(struct folio *src,
 	/* Drop an anon_vma reference if we took one */
 	if (anon_vma)
 		put_anon_vma(anon_vma);
-	folio_unlock(src);
-	list_move_tail(&src->lru, ret);
+	if (locked)
+		folio_unlock(src);
+	if (ret)
+		list_move_tail(&src->lru, ret);
 }
 
 /* Restore the destination folio to the original state upon failure */
 static void migrate_folio_undo_dst(struct folio *dst,
+				   bool locked,
 				   free_page_t put_new_page,
 				   unsigned long private)
 {
-	folio_unlock(dst);
+	if (locked)
+		folio_unlock(dst);
 	if (put_new_page)
 		put_new_page(&dst->page, private);
 	else
@@ -1078,13 +1083,42 @@ static void migrate_folio_done(struct folio *src,
 	folio_put(src);
 }
 
-static int __migrate_folio_unmap(struct folio *src, struct folio *dst,
-				 int force, bool force_lock, enum migrate_mode mode)
+/* Obtain the lock on page, remove all ptes. */
+static int migrate_folio_unmap(new_page_t get_new_page, free_page_t put_new_page,
+			       unsigned long private, struct folio *src,
+			       struct folio **dstp, int force, bool force_lock,
+			       enum migrate_mode mode, enum migrate_reason reason,
+			       struct list_head *ret)
 {
+	struct folio *dst;
 	int rc = -EAGAIN;
+	struct page *newpage = NULL;
 	int page_was_mapped = 0;
 	struct anon_vma *anon_vma = NULL;
 	bool is_lru = !__PageMovable(&src->page);
+	bool locked = false;
+	bool dst_locked = false;
+
+	if (!thp_migration_supported() && folio_test_transhuge(src))
+		return -ENOSYS;
+
+	if (folio_ref_count(src) == 1) {
+		/* Folio was freed from under us. So we are done. */
+		folio_clear_active(src);
+		folio_clear_unevictable(src);
+		/* free_pages_prepare() will clear PG_isolated. */
+		list_del(&src->lru);
+		migrate_folio_done(src, reason);
+		return MIGRATEPAGE_SUCCESS;
+	}
+
+	newpage = get_new_page(&src->page, private);
+	if (!newpage)
+		return -ENOMEM;
+	dst = page_folio(newpage);
+	*dstp = dst;
+
+	dst->private = NULL;
 
 	if (!folio_trylock(src)) {
 		if (!force || mode == MIGRATE_ASYNC)
@@ -1119,6 +1153,7 @@ static int __migrate_folio_unmap(struct folio *src, struct folio *dst,
 
 		folio_lock(src);
 	}
+	locked = true;
 
 	if (folio_test_writeback(src)) {
 		/*
@@ -1133,10 +1168,10 @@ static int __migrate_folio_unmap(struct folio *src, struct folio *dst,
 			break;
 		default:
 			rc = -EBUSY;
-			goto out_unlock;
+			goto out;
 		}
 		if (!force)
-			goto out_unlock;
+			goto out;
 		folio_wait_writeback(src);
 	}
 
@@ -1166,7 +1201,8 @@ static int __migrate_folio_unmap(struct folio *src, struct folio *dst,
 	 * This is much like races on refcount of oldpage: just don't BUG().
 	 */
 	if (unlikely(!folio_trylock(dst)))
-		goto out_unlock;
+		goto out;
+	dst_locked = true;
 
 	if (unlikely(!is_lru)) {
 		__migrate_folio_record(dst, page_was_mapped, anon_vma);
@@ -1188,7 +1224,7 @@ static int __migrate_folio_unmap(struct folio *src, struct folio *dst,
 	if (!src->mapping) {
 		if (folio_test_private(src)) {
 			try_to_free_buffers(src);
-			goto out_unlock_both;
+			goto out;
 		}
 	} else if (folio_mapped(src)) {
 		/* Establish migration ptes */
@@ -1203,74 +1239,26 @@ static int __migrate_folio_unmap(struct folio *src, struct folio *dst,
 		return MIGRATEPAGE_UNMAP;
 	}
 
-	if (page_was_mapped)
-		remove_migration_ptes(src, src, false);
-
-out_unlock_both:
-	folio_unlock(dst);
-out_unlock:
-	/* Drop an anon_vma reference if we took one */
-	if (anon_vma)
-		put_anon_vma(anon_vma);
-	folio_unlock(src);
 out:
-
-	return rc;
-}
-
-/* Obtain the lock on page, remove all ptes. */
-static int migrate_folio_unmap(new_page_t get_new_page, free_page_t put_new_page,
-			       unsigned long private, struct folio *src,
-			       struct folio **dstp, int force, bool force_lock,
-			       enum migrate_mode mode, enum migrate_reason reason,
-			       struct list_head *ret)
-{
-	struct folio *dst;
-	int rc = MIGRATEPAGE_UNMAP;
-	struct page *newpage = NULL;
-
-	if (!thp_migration_supported() && folio_test_transhuge(src))
-		return -ENOSYS;
-
-	if (folio_ref_count(src) == 1) {
-		/* Folio was freed from under us. So we are done. */
-		folio_clear_active(src);
-		folio_clear_unevictable(src);
-		/* free_pages_prepare() will clear PG_isolated. */
-		list_del(&src->lru);
-		migrate_folio_done(src, reason);
-		return MIGRATEPAGE_SUCCESS;
-	}
-
-	newpage = get_new_page(&src->page, private);
-	if (!newpage)
-		return -ENOMEM;
-	dst = page_folio(newpage);
-	*dstp = dst;
-
-	dst->private = NULL;
-	rc = __migrate_folio_unmap(src, dst, force, force_lock, mode);
-	if (rc == MIGRATEPAGE_UNMAP)
-		return rc;
-
 	/*
 	 * A page that has not been migrated will have kept its
 	 * references and be restored.
 	 */
 	/* restore the folio to right list. */
-	if (rc != -EAGAIN && rc != -EDEADLOCK)
-		list_move_tail(&src->lru, ret);
+	if (rc == -EAGAIN || rc == -EDEADLOCK)
+		ret = NULL;
 
-	if (put_new_page)
-		put_new_page(&dst->page, private);
-	else
-		folio_put(dst);
+	migrate_folio_undo_src(src, page_was_mapped, anon_vma, locked, ret);
+	migrate_folio_undo_dst(dst, dst_locked, put_new_page, private);
 
 	return rc;
 }
 
-static int __migrate_folio_move(struct folio *src, struct folio *dst,
-				enum migrate_mode mode)
+/* Migrate the folio to the newly allocated folio in dst. */
+static int migrate_folio_move(free_page_t put_new_page, unsigned long private,
+			      struct folio *src, struct folio *dst,
+			      enum migrate_mode mode, enum migrate_reason reason,
+			      struct list_head *ret)
 {
 	int rc;
 	int page_was_mapped = 0;
@@ -1283,12 +1271,8 @@ static int __migrate_folio_move(struct folio *src, struct folio *dst,
 	list_del(&dst->lru);
 
 	rc = move_to_new_folio(dst, src, mode);
-
-	if (rc == -EAGAIN) {
-		list_add(&dst->lru, prev);
-		__migrate_folio_record(dst, page_was_mapped, anon_vma);
-		return rc;
-	}
+	if (rc)
+		goto out;
 
 	if (unlikely(!is_lru))
 		goto out_unlock_both;
@@ -1302,70 +1286,45 @@ static int __migrate_folio_move(struct folio *src, struct folio *dst,
 	 * unsuccessful, and other cases when a page has been temporarily
 	 * isolated from the unevictable LRU: but this case is the easiest.
 	 */
-	if (rc == MIGRATEPAGE_SUCCESS) {
-		folio_add_lru(dst);
-		if (page_was_mapped)
-			lru_add_drain();
-	}
+	folio_add_lru(dst);
+	if (page_was_mapped)
+		lru_add_drain();
 
 	if (page_was_mapped)
-		remove_migration_ptes(src,
-			rc == MIGRATEPAGE_SUCCESS ? dst : src, false);
+		remove_migration_ptes(src, dst, false);
 
 out_unlock_both:
 	folio_unlock(dst);
-	/* Drop an anon_vma reference if we took one */
-	if (anon_vma)
-		put_anon_vma(anon_vma);
-	folio_unlock(src);
+	set_page_owner_migrate_reason(&dst->page, reason);
 	/*
 	 * If migration is successful, decrease refcount of dst,
 	 * which will not free the page because new page owner increased
 	 * refcounter.
 	 */
-	if (rc == MIGRATEPAGE_SUCCESS)
-		folio_put(dst);
-
-	return rc;
-}
-
-/* Migrate the folio to the newly allocated folio in dst. */
-static int migrate_folio_move(free_page_t put_new_page, unsigned long private,
-			      struct folio *src, struct folio *dst,
-			      enum migrate_mode mode, enum migrate_reason reason,
-			      struct list_head *ret)
-{
-	int rc;
-
-	rc = __migrate_folio_move(src, dst, mode);
-	if (rc == MIGRATEPAGE_SUCCESS)
-		set_page_owner_migrate_reason(&dst->page, reason);
-
-	if (rc != -EAGAIN) {
-		/*
-		 * A folio that has been migrated has all references
-		 * removed and will be freed. A folio that has not been
-		 * migrated will have kept its references and be restored.
-		 */
-		list_del(&src->lru);
-	}
+	folio_put(dst);
 
 	/*
-	 * If migration is successful, releases reference grabbed during
-	 * isolation. Otherwise, restore the folio to right list unless
-	 * we want to retry.
+	 * A page that has been migrated has all references removed
+	 * and will be freed.
 	 */
-	if (rc == MIGRATEPAGE_SUCCESS) {
-		migrate_folio_done(src, reason);
-	} else if (rc != -EAGAIN) {
-		list_add_tail(&src->lru, ret);
+	list_del(&src->lru);
+	/* Drop an anon_vma reference if we took one */
+	if (anon_vma)
+		put_anon_vma(anon_vma);
+	folio_unlock(src);
+	migrate_folio_done(src, reason);
 
-		if (put_new_page)
-			put_new_page(&dst->page, private);
-		else
-			folio_put(dst);
+	return rc;
+out:
+	if (rc == -EAGAIN) {
+		list_add(&dst->lru, prev);
+		__migrate_folio_record(dst, page_was_mapped, anon_vma);
+		return rc;
 	}
 
+	migrate_folio_undo_src(src, page_was_mapped, anon_vma, true, ret);
+	migrate_folio_undo_dst(dst, true, put_new_page, private);
+
 	return rc;
 }
 
@@ -1897,9 +1856,9 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
 
 			__migrate_folio_extract(dst, &page_was_mapped, &anon_vma);
 			migrate_folio_undo_src(folio, page_was_mapped, anon_vma,
-					       ret_folios);
+					       true, ret_folios);
 			list_del(&dst->lru);
-			migrate_folio_undo_dst(dst, put_new_page, private);
+			migrate_folio_undo_dst(dst, true, put_new_page, private);
 			dst = dst2;
 			dst2 = list_next_entry(dst, lru);
 		}
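
As a side note for readers following the refactoring: the shape this patch
converges on is the common "record what you actually did, then undo only
that" error-handling pattern, with a single out: label and per-step flags
replacing the old out_unlock/out_unlock_both label ladder.  Below is a
minimal, self-contained userspace sketch of that pattern.  It is
illustrative only: struct item, trylock_item(), undo_item() and move_item()
are names invented for this note and are not part of the mm/migrate.c API.

/* toy_unmap_move.c -- illustrative sketch only, not kernel code. */
#include <stdbool.h>
#include <stdio.h>

struct item {
	bool locked;
};

static bool trylock_item(struct item *it)
{
	if (it->locked)
		return false;
	it->locked = true;
	return true;
}

/*
 * Like migrate_folio_undo_src()/_dst() after this patch: the caller says
 * whether the lock was actually taken, so one helper serves every
 * partially-completed failure state.
 */
static void undo_item(struct item *it, bool locked)
{
	if (locked)
		it->locked = false;
}

static int move_item(struct item *src, struct item *dst, bool fail_midway)
{
	bool src_locked = false;
	bool dst_locked = false;
	int rc = -1;			/* "failed" until proven otherwise */

	if (!trylock_item(src))
		goto out;
	src_locked = true;

	if (fail_midway)		/* stand-in for writeback / -EBUSY / -EAGAIN */
		goto out;

	if (!trylock_item(dst))
		goto out;
	dst_locked = true;

	/* ... the real work (unmap + move) would happen here ... */

	/* Success path: release both locks normally. */
	dst->locked = false;
	src->locked = false;
	return 0;
out:
	/* Failure path: undo only the steps that actually happened. */
	undo_item(src, src_locked);
	undo_item(dst, dst_locked);
	return rc;
}

int main(void)
{
	struct item src = { 0 }, dst = { 0 };

	printf("fail case: rc=%d\n", move_item(&src, &dst, true));
	printf("ok case:   rc=%d\n", move_item(&src, &dst, false));
	return 0;
}

Built with a plain C compiler (e.g. cc toy_unmap_move.c), the failing call
unlocks only the src it actually locked and leaves dst untouched; that is
exactly the role the new locked/dst_locked arguments play for
migrate_folio_undo_src()/migrate_folio_undo_dst() in this patch.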