From patchwork Tue Oct 10 14:21:11 2023
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 150817
From: Ryan Roberts
To: Andrew Morton, David Hildenbrand, Matthew Wilcox, Huang Ying,
	Gao Xiang, Yu Zhao, Yang Shi, Michal Hocko
Cc: Ryan Roberts, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [RFC PATCH v1 2/2] mm: swap: Swap-out small-sized THP without splitting
Date: Tue, 10 Oct 2023 15:21:11 +0100
Message-Id: <20231010142111.3997780-3-ryan.roberts@arm.com>
In-Reply-To: <20231010142111.3997780-1-ryan.roberts@arm.com>
References: <20231010142111.3997780-1-ryan.roberts@arm.com>

The upcoming anonymous small-sized THP feature enables performance
improvements by allocating large folios for anonymous memory. However,
I've observed that on an arm64 system running a parallel workload (e.g.
kernel compilation) across many cores, under high memory pressure, the
speed regresses. This is due to bottlenecking on the increased number
of TLBIs added due to all the extra folio splitting.

Therefore, solve this regression by adding support for swapping out
small-sized THP without needing to split the folio, just like is
already done for PMD-sized THP.
This change only applies when CONFIG_THP_SWAP is enabled, and when the
swap backing store is a non-rotating block device - these are the same
constraints as for the existing PMD-sized THP swap-out support.

Note that no attempt is made to swap-in THP here - this is still done
page-by-page, like for PMD-sized THP.

The main change here is to improve the swap entry allocator so that it
can allocate any power-of-2 number of contiguous entries between
[4, (1 << PMD_ORDER)]. This is done by allocating a cluster for each
distinct order and allocating sequentially from it until the cluster is
full. This ensures that we don't need to search the map and we get no
fragmentation due to alignment padding for different orders in the
cluster. If there is no current cluster for a given order, we attempt
to allocate a free cluster from the list. If there are no free
clusters, we fail the allocation and the caller falls back to splitting
the folio and allocates individual entries (as per the existing
PMD-sized THP fallback).

As far as I can tell, this should not cause any extra fragmentation
concerns, given how similar it is to the existing PMD-sized THP
allocation mechanism. There will be up to (PMD_ORDER-1) clusters in
concurrent use, though in practice the number of orders in use will be
small.

Signed-off-by: Ryan Roberts
---

A simplified, illustrative userspace sketch of the per-order allocation
scheme described above follows the patch.

 include/linux/swap.h |  7 ++++++
 mm/swapfile.c        | 60 +++++++++++++++++++++++++++++++++-----------
 mm/vmscan.c          | 10 +++++---
 3 files changed, 59 insertions(+), 18 deletions(-)

--
2.25.1

diff --git a/include/linux/swap.h b/include/linux/swap.h
index a073366a227c..fc55b760aeff 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -320,6 +320,13 @@ struct swap_info_struct {
 					 */
 	struct work_struct discard_work; /* discard worker */
 	struct swap_cluster_list discard_clusters; /* discard clusters list */
+	unsigned int large_next[PMD_ORDER]; /*
+					  * next free offset within current
+					  * allocation cluster for large
+					  * folios, or UINT_MAX if no current
+					  * cluster. Index is (order - 1).
+					  * Only when cluster_info is used.
+					  */
 	struct plist_node avail_lists[]; /*
 					   * entries in swap_avail_heads, one
 					   * entry per node.
diff --git a/mm/swapfile.c b/mm/swapfile.c
index c668838fa660..f8093dedc866 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -987,8 +987,10 @@ static int scan_swap_map_slots(struct swap_info_struct *si,
 	return n_ret;
 }
 
-static int swap_alloc_cluster(struct swap_info_struct *si, swp_entry_t *slot)
+static int swap_alloc_large(struct swap_info_struct *si, swp_entry_t *slot,
+			    unsigned int nr_pages)
 {
+	int order;
 	unsigned long idx;
 	struct swap_cluster_info *ci;
 	unsigned long offset;
@@ -1002,20 +1004,47 @@ static int swap_alloc_cluster(struct swap_info_struct *si, swp_entry_t *slot)
 		return 0;
 	}
 
-	if (cluster_list_empty(&si->free_clusters))
-		return 0;
+	VM_WARN_ON(nr_pages < 2);
+	VM_WARN_ON(nr_pages > SWAPFILE_CLUSTER);
+	VM_WARN_ON(!is_power_of_2(nr_pages));
 
-	idx = cluster_list_first(&si->free_clusters);
-	offset = idx * SWAPFILE_CLUSTER;
-	ci = lock_cluster(si, offset);
-	alloc_cluster(si, idx);
-	cluster_set_count_flag(ci, SWAPFILE_CLUSTER, 0);
+	order = ilog2(nr_pages);
+	offset = si->large_next[order - 1];
+
+	if (offset == UINT_MAX) {
+		if (cluster_list_empty(&si->free_clusters))
+			return 0;
 
-	memset(si->swap_map + offset, SWAP_HAS_CACHE, SWAPFILE_CLUSTER);
+		idx = cluster_list_first(&si->free_clusters);
+		offset = idx * SWAPFILE_CLUSTER;
+
+		ci = lock_cluster(si, offset);
+		alloc_cluster(si, idx);
+		cluster_set_count_flag(ci, SWAPFILE_CLUSTER, 0);
+
+		/*
+		 * If scan_swap_map_slots() can't find a free cluster, it will
+		 * check si->swap_map directly. To make sure this standby
+		 * cluster isn't taken by scan_swap_map_slots(), mark the swap
+		 * entries bad (occupied). (same approach as discard).
+		 */
+		memset(si->swap_map + offset + nr_pages, SWAP_MAP_BAD,
+		       SWAPFILE_CLUSTER - nr_pages);
+	} else {
+		idx = offset / SWAPFILE_CLUSTER;
+		ci = lock_cluster(si, offset);
+	}
+
+	memset(si->swap_map + offset, SWAP_HAS_CACHE, nr_pages);
 	unlock_cluster(ci);
-	swap_range_alloc(si, offset, SWAPFILE_CLUSTER);
+	swap_range_alloc(si, offset, nr_pages);
 	*slot = swp_entry(si->type, offset);
+	offset += nr_pages;
+	if (idx != offset / SWAPFILE_CLUSTER)
+		offset = UINT_MAX;
+	si->large_next[order - 1] = offset;
 
 	return 1;
 }
@@ -1041,7 +1070,7 @@ int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_size)
 	int node;
 
 	/* Only single cluster request supported */
-	WARN_ON_ONCE(n_goal > 1 && size == SWAPFILE_CLUSTER);
+	WARN_ON_ONCE(n_goal > 1 && size > 1);
 
 	spin_lock(&swap_avail_lock);
 
@@ -1078,14 +1107,14 @@ int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_size)
 			spin_unlock(&si->lock);
 			goto nextsi;
 		}
-		if (size == SWAPFILE_CLUSTER) {
+		if (size > 1) {
 			if (si->flags & SWP_BLKDEV)
-				n_ret = swap_alloc_cluster(si, swp_entries);
+				n_ret = swap_alloc_large(si, swp_entries, size);
 		} else
 			n_ret = scan_swap_map_slots(si, SWAP_HAS_CACHE,
 						    n_goal, swp_entries);
 		spin_unlock(&si->lock);
-		if (n_ret || size == SWAPFILE_CLUSTER)
+		if (n_ret || size > 1)
 			goto check_out;
 		cond_resched();
@@ -2725,6 +2754,9 @@ static struct swap_info_struct *alloc_swap_info(void)
 	spin_lock_init(&p->cont_lock);
 	init_completion(&p->comp);
 
+	for (i = 0; i < ARRAY_SIZE(p->large_next); i++)
+		p->large_next[i] = UINT_MAX;
+
 	return p;
 }
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c16e2b1ea8ae..5984d2ae4547 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1212,11 +1212,13 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 					if (!can_split_folio(folio, NULL))
 						goto activate_locked;
 					/*
-					 * Split folios without a PMD map right
-					 * away. Chances are some or all of the
-					 * tail pages can be freed without IO.
+					 * Split PMD-mappable folios without a
+					 * PMD map right away. Chances are some
+					 * or all of the tail pages can be freed
+					 * without IO.
 					 */
-					if (!folio_entire_mapcount(folio) &&
+					if (folio_test_pmd_mappable(folio) &&
+					    !folio_entire_mapcount(folio) &&
 					    split_folio_to_list(folio, folio_list))
 						goto activate_locked;
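
Below is the simplified, userspace-only sketch of the per-order cluster
allocation scheme referenced above. It is illustrative only and is not
the kernel implementation (that is swap_alloc_large() in the patch);
all names here (sim_swap, sim_grab_cluster, sim_alloc_large,
CLUSTER_SIZE, MAX_ORDER_SIM, NR_CLUSTERS, NO_CLUSTER) are invented for
the example and merely stand in for their kernel counterparts
(SWAPFILE_CLUSTER, PMD_ORDER, si->large_next[], UINT_MAX). Locking,
swap_map bookkeeping and the fallback-to-split path are omitted.

/*
 * Userspace simulation of per-order swap cluster allocation.
 * Each order keeps its own "current cluster" cursor and carves
 * nr_pages-sized chunks from it sequentially; when the cluster is
 * exhausted, the cursor is reset and a fresh cluster is grabbed on the
 * next allocation. Different orders never share a cluster, so there is
 * no alignment padding and no map search.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define CLUSTER_SIZE	512		/* stands in for SWAPFILE_CLUSTER */
#define MAX_ORDER_SIM	9		/* stands in for PMD_ORDER */
#define NR_CLUSTERS	4		/* tiny pretend swap device */
#define NO_CLUSTER	UINT32_MAX	/* sentinel: no current cluster */

struct sim_swap {
	bool cluster_used[NR_CLUSTERS];		/* crude free-cluster list */
	uint32_t large_next[MAX_ORDER_SIM];	/* next free offset per order,
						 * indexed by (order - 1) */
};

static void sim_init(struct sim_swap *s)
{
	for (int i = 0; i < NR_CLUSTERS; i++)
		s->cluster_used[i] = false;
	for (int i = 0; i < MAX_ORDER_SIM; i++)
		s->large_next[i] = NO_CLUSTER;
}

/* Grab the first free cluster, or return -1 if none remain. */
static int sim_grab_cluster(struct sim_swap *s)
{
	for (int i = 0; i < NR_CLUSTERS; i++) {
		if (!s->cluster_used[i]) {
			s->cluster_used[i] = true;
			return i;
		}
	}
	return -1;
}

/*
 * Allocate nr_pages (a power of 2) contiguous entries of the given
 * order. Returns the starting offset, or -1 when no cluster is
 * available and the caller would fall back to splitting the folio.
 */
static long sim_alloc_large(struct sim_swap *s, unsigned int nr_pages,
			    unsigned int order)
{
	uint32_t offset = s->large_next[order - 1];
	long ret;

	if (offset == NO_CLUSTER) {
		int idx = sim_grab_cluster(s);

		if (idx < 0)
			return -1;	/* no free cluster: caller splits */
		offset = (uint32_t)idx * CLUSTER_SIZE;
	}

	ret = offset;
	offset += nr_pages;
	if (offset % CLUSTER_SIZE == 0)
		offset = NO_CLUSTER;	/* cluster exhausted */
	s->large_next[order - 1] = offset;

	return ret;
}

int main(void)
{
	struct sim_swap s;

	sim_init(&s);

	/* A few order-4 (16 page) and order-2 (4 page) allocations. */
	for (int i = 0; i < 3; i++)
		printf("order-4 -> offset %ld\n", sim_alloc_large(&s, 16, 4));
	for (int i = 0; i < 3; i++)
		printf("order-2 -> offset %ld\n", sim_alloc_large(&s, 4, 2));

	return 0;
}

Built with any C99 compiler, this sketch prints offsets 0/16/32 for the
order-4 allocations and 512/516/520 for the order-2 allocations,
showing each order filling its own cluster sequentially with no
interleaving between orders.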