Message ID | 20240118111036.72641-5-21cnbao@gmail.com |
---|---|
State | New |
Headers | From: Barry Song <21cnbao@gmail.com> To: ryan.roberts@arm.com, akpm@linux-foundation.org, david@redhat.com, linux-mm@kvack.org Cc: linux-kernel@vger.kernel.org, mhocko@suse.com, shy828301@gmail.com, wangkefeng.wang@huawei.com, willy@infradead.org, xiang@kernel.org, ying.huang@intel.com, yuzhao@google.com, surenb@google.com, steven.price@arm.com, Chuanhua Han <hanchuanhua@oppo.com>, Barry Song <v-songbaohua@oppo.com> Subject: [PATCH RFC 4/6] mm: support large folios swapin as a whole Date: Fri, 19 Jan 2024 00:10:34 +1300 Message-Id: <20240118111036.72641-5-21cnbao@gmail.com> In-Reply-To: <20240118111036.72641-1-21cnbao@gmail.com> |
Series | mm: support large folios swap-in |
Commit Message
Barry Song
Jan. 18, 2024, 11:10 a.m. UTC
From: Chuanhua Han <hanchuanhua@oppo.com>

On an embedded system like Android, more than half of anon memory is actually
in swap devices such as zRAM. For example, while an app is switched to the
background, most of its memory might be swapped out.

Now that we have mTHP features, unfortunately, if we don't support large
folios swap-in, then once those large folios are swapped out we immediately
lose the performance gain we can get through large folios and hardware
optimizations such as CONT-PTE.

This patch brings up mTHP swap-in support. Right now, we limit mTHP swap-in
to those contiguous swap entries which were likely swapped out from an mTHP
as a whole.

On the other hand, the current implementation only covers the SWAP_SYNCHRONOUS
case. It doesn't support swapin_readahead for large folios yet.

Right now, we re-fault large folios which are still in the swapcache as a
whole; this effectively reduces the extra loops and early exits which we added
in arch_swap_restore() while supporting MTE restore for folios rather than
single pages.

Signed-off-by: Chuanhua Han <hanchuanhua@oppo.com>
Co-developed-by: Barry Song <v-songbaohua@oppo.com>
Signed-off-by: Barry Song <v-songbaohua@oppo.com>
---
 mm/memory.c | 108 +++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 94 insertions(+), 14 deletions(-)
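To illustrate the contiguity requirement this relies on: for a folio of nr_pages subpages, the swap offset of the first entry must be naturally aligned to nr_pages, and the remaining entries must follow it consecutively in the same swap device. A worked example of the alignment arithmetic the patch uses (the offset value is arbitrary, chosen only for illustration):

    /* 64KiB mTHP with 4KiB base pages => nr_pages = 16 */
    /* faulting PTE holds a swap entry with swp_offset(entry) == 0x1237 */
    start_offset = swp_offset(entry) & ~(nr_pages - 1);    /* 0x1230 */
    /*
     * The whole-folio swap-in is attempted only if the 16 PTEs covering
     * the aligned range hold exactly offsets 0x1230..0x123f of the same
     * swp_type(); otherwise only the faulting page is mapped.
     */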
Comments
On Thu, Jan 18, 2024 at 3:12 AM Barry Song <21cnbao@gmail.com> wrote: > > From: Chuanhua Han <hanchuanhua@oppo.com> > > On an embedded system like Android, more than half of anon memory is actually > in swap devices such as zRAM. For example, while an app is switched to back- > ground, its most memory might be swapped-out. > > Now we have mTHP features, unfortunately, if we don't support large folios > swap-in, once those large folios are swapped-out, we immediately lose the > performance gain we can get through large folios and hardware optimization > such as CONT-PTE. > > This patch brings up mTHP swap-in support. Right now, we limit mTHP swap-in > to those contiguous swaps which were likely swapped out from mTHP as a whole. > > On the other hand, the current implementation only covers the SWAP_SYCHRONOUS > case. It doesn't support swapin_readahead as large folios yet. > > Right now, we are re-faulting large folios which are still in swapcache as a > whole, this can effectively decrease extra loops and early-exitings which we > have increased in arch_swap_restore() while supporting MTE restore for folios > rather than page. > > Signed-off-by: Chuanhua Han <hanchuanhua@oppo.com> > Co-developed-by: Barry Song <v-songbaohua@oppo.com> > Signed-off-by: Barry Song <v-songbaohua@oppo.com> > --- > mm/memory.c | 108 +++++++++++++++++++++++++++++++++++++++++++++------- > 1 file changed, 94 insertions(+), 14 deletions(-) > > diff --git a/mm/memory.c b/mm/memory.c > index f61a48929ba7..928b3f542932 100644 > --- a/mm/memory.c > +++ b/mm/memory.c > @@ -107,6 +107,8 @@ EXPORT_SYMBOL(mem_map); > static vm_fault_t do_fault(struct vm_fault *vmf); > static vm_fault_t do_anonymous_page(struct vm_fault *vmf); > static bool vmf_pte_changed(struct vm_fault *vmf); > +static struct folio *alloc_anon_folio(struct vm_fault *vmf, > + bool (*pte_range_check)(pte_t *, int)); Instead of returning "bool", the pte_range_check() can return the start of the swap entry of the large folio. That will save some of the later code needed to get the start of the large folio. > > /* > * Return true if the original pte was a uffd-wp pte marker (so the pte was > @@ -3784,6 +3786,34 @@ static vm_fault_t handle_pte_marker(struct vm_fault *vmf) > return VM_FAULT_SIGBUS; > } > > +static bool pte_range_swap(pte_t *pte, int nr_pages) This function name seems to suggest it will perform the range swap. That is not what it is doing. Suggest change to some other name reflecting that it is only a condition test without actual swap action. I am not very good at naming functions. Just think it out loud: e.g. pte_range_swap_check, pte_test_range_swap. You can come up with something better. > +{ > + int i; > + swp_entry_t entry; > + unsigned type; > + pgoff_t start_offset; > + > + entry = pte_to_swp_entry(ptep_get_lockless(pte)); > + if (non_swap_entry(entry)) > + return false; > + start_offset = swp_offset(entry); > + if (start_offset % nr_pages) > + return false; This suggests the pte argument needs to point to the beginning of the large folio equivalent of swap entry(not sure what to call it. Let me call it "large folio swap" here.). We might want to unify the terms for that. Any way, might want to document this requirement, otherwise the caller might consider passing the current pte that generates the fault. From the function name it is not obvious which pte should pass it. 
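For instance, a comment along these lines (hypothetical wording, not part of the posted patch) would make the requirement explicit:

    /*
     * pte_range_swap() - test whether @nr_pages PTEs hold contiguous,
     * naturally aligned swap entries of the same swap type.
     * @pte: must point to the PTE of the *first* page of the candidate
     *       range (the one whose swap offset is aligned to @nr_pages),
     *       not necessarily the PTE that triggered the fault.
     * @nr_pages: number of pages in the candidate large folio.
     */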
> + > + type = swp_type(entry); > + for (i = 1; i < nr_pages; i++) { You might want to test the last page backwards, because if the entry is not the large folio swap, most likely it will have the last entry invalid. Some of the beginning swap entries might match due to batch allocation etc. The SSD likes to group the nearby swap entry write out together on the disk. > + entry = pte_to_swp_entry(ptep_get_lockless(pte + i)); > + if (non_swap_entry(entry)) > + return false; > + if (swp_offset(entry) != start_offset + i) > + return false; > + if (swp_type(entry) != type) > + return false; > + } > + > + return true; > +} > + > /* > * We enter with non-exclusive mmap_lock (to exclude vma changes, > * but allow concurrent faults), and pte mapped but not yet locked. > @@ -3804,6 +3834,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) > pte_t pte; > vm_fault_t ret = 0; > void *shadow = NULL; > + int nr_pages = 1; > + unsigned long start_address; > + pte_t *start_pte; > > if (!pte_unmap_same(vmf)) > goto out; > @@ -3868,13 +3901,20 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) > if (data_race(si->flags & SWP_SYNCHRONOUS_IO) && > __swap_count(entry) == 1) { > /* skip swapcache */ > - folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, > - vma, vmf->address, false); > + folio = alloc_anon_folio(vmf, pte_range_swap); This function can call pte_range_swap() twice(), one here, another one in folio_test_large(). Consider caching the result so it does not need to walk the pte range swap twice. I think alloc_anon_folio should either be told what is the size(prefered) or just figure out the right size. I don't think it needs to pass in the checking function as function callbacks. There are two call sites of alloc_anon_folio, they are all within this function. The call back seems a bit overkill here. Also duplicate the range swap walk. > page = &folio->page; > if (folio) { > __folio_set_locked(folio); > __folio_set_swapbacked(folio); > > + if (folio_test_large(folio)) { > + unsigned long start_offset; > + > + nr_pages = folio_nr_pages(folio); > + start_offset = swp_offset(entry) & ~(nr_pages - 1); Here is the first place place we roll up the start offset with folio size > + entry = swp_entry(swp_type(entry), start_offset); > + } > + > if (mem_cgroup_swapin_charge_folio(folio, > vma->vm_mm, GFP_KERNEL, > entry)) { > @@ -3980,6 +4020,39 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) > */ > vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address, > &vmf->ptl); > + > + start_address = vmf->address; > + start_pte = vmf->pte; > + if (folio_test_large(folio)) { > + unsigned long nr = folio_nr_pages(folio); > + unsigned long addr = ALIGN_DOWN(vmf->address, nr * PAGE_SIZE); > + pte_t *pte_t = vmf->pte - (vmf->address - addr) / PAGE_SIZE; Here is the second place we roll up the folio size. Maybe we can cache results and avoid repetition? > + > + /* > + * case 1: we are allocating large_folio, try to map it as a whole > + * iff the swap entries are still entirely mapped; > + * case 2: we hit a large folio in swapcache, and all swap entries > + * are still entirely mapped, try to map a large folio as a whole. > + * otherwise, map only the faulting page within the large folio > + * which is swapcache > + */ One question I have in mind is that the swap device is locked. We can't change the swap slot allocations. It does not stop the pte entry getting changed right? Then we can have someone in the user pace racing to change the PTE vs we checking the pte there. 
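Returning to the earlier point about probing the last entry first, a hypothetical early-reject ordering (a sketch only, not code from the patch) could look like:

    /*
     * If the last PTE doesn't hold the expected entry, the range cannot
     * be a whole large-folio swap, so bail out before the full walk.
     */
    entry = pte_to_swp_entry(ptep_get_lockless(pte + nr_pages - 1));
    if (non_swap_entry(entry) || swp_type(entry) != type ||
        swp_offset(entry) != start_offset + nr_pages - 1)
            return false;
    /* ... then verify the intermediate entries as the patch already does ... */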
> + if (pte_range_swap(pte_t, nr)) { After this pte_range_swap() check, some of the PTE entries get changed and now we don't have the full large page swap any more? At least I can't conclude this possibility can't happen yet, please enlighten me. > + start_address = addr; > + start_pte = pte_t; > + if (unlikely(folio == swapcache)) { > + /* > + * the below has been done before swap_read_folio() > + * for case 1 > + */ > + nr_pages = nr; > + entry = pte_to_swp_entry(ptep_get(start_pte)); If we make pte_range_swap() return the entry, we can avoid refetching the swap entry here. > + page = &folio->page; > + } > + } else if (nr_pages > 1) { /* ptes have changed for case 1 */ > + goto out_nomap; > + } > + } > + I rewrote the above to make the code indentation matching the execution flow. I did not add any functional change. Just rearrange the code to be a bit more streamlined. Get rid of the "else if goto". if (!pte_range_swap(pte_t, nr)) { if (nr_pages > 1) /* ptes have changed for case 1 */ goto out_nomap; goto check_pte; } start_address = addr; start_pte = pte_t; if (unlikely(folio == swapcache)) { /* * the below has been done before swap_read_folio() * for case 1 */ nr_pages = nr; entry = pte_to_swp_entry(ptep_get(start_pte)); page = &folio->page; } } check_pte: > if (unlikely(!vmf->pte || !pte_same(ptep_get(vmf->pte), vmf->orig_pte))) > goto out_nomap; > > @@ -4047,12 +4120,14 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) > * We're already holding a reference on the page but haven't mapped it > * yet. > */ > - swap_free(entry); > + swap_nr_free(entry, nr_pages); > if (should_try_to_free_swap(folio, vma, vmf->flags)) > folio_free_swap(folio); > > - inc_mm_counter(vma->vm_mm, MM_ANONPAGES); > - dec_mm_counter(vma->vm_mm, MM_SWAPENTS); > + folio_ref_add(folio, nr_pages - 1); > + add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr_pages); > + add_mm_counter(vma->vm_mm, MM_SWAPENTS, -nr_pages); > + > pte = mk_pte(page, vma->vm_page_prot); > > /* > @@ -4062,14 +4137,14 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) > * exclusivity. 
> */ > if (!folio_test_ksm(folio) && > - (exclusive || folio_ref_count(folio) == 1)) { > + (exclusive || folio_ref_count(folio) == nr_pages)) { > if (vmf->flags & FAULT_FLAG_WRITE) { > pte = maybe_mkwrite(pte_mkdirty(pte), vma); > vmf->flags &= ~FAULT_FLAG_WRITE; > } > rmap_flags |= RMAP_EXCLUSIVE; > } > - flush_icache_page(vma, page); > + flush_icache_pages(vma, page, nr_pages); > if (pte_swp_soft_dirty(vmf->orig_pte)) > pte = pte_mksoft_dirty(pte); > if (pte_swp_uffd_wp(vmf->orig_pte)) > @@ -4081,14 +4156,15 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) > folio_add_new_anon_rmap(folio, vma, vmf->address); > folio_add_lru_vma(folio, vma); > } else { > - folio_add_anon_rmap_pte(folio, page, vma, vmf->address, > + folio_add_anon_rmap_ptes(folio, page, nr_pages, vma, start_address, > rmap_flags); > } > > VM_BUG_ON(!folio_test_anon(folio) || > (pte_write(pte) && !PageAnonExclusive(page))); > - set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte); > - arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte); > + set_ptes(vma->vm_mm, start_address, start_pte, pte, nr_pages); > + > + arch_do_swap_page(vma->vm_mm, vma, start_address, pte, vmf->orig_pte); > > folio_unlock(folio); > if (folio != swapcache && swapcache) { > @@ -4105,6 +4181,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) > } > > if (vmf->flags & FAULT_FLAG_WRITE) { > + if (folio_test_large(folio) && nr_pages > 1) > + vmf->orig_pte = ptep_get(vmf->pte); > + > ret |= do_wp_page(vmf); > if (ret & VM_FAULT_ERROR) > ret &= VM_FAULT_ERROR; > @@ -4112,7 +4191,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) > } > > /* No need to invalidate - it was non-present before */ > - update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1); > + update_mmu_cache_range(vmf, vma, start_address, start_pte, nr_pages); > unlock: > if (vmf->pte) > pte_unmap_unlock(vmf->pte, vmf->ptl); > @@ -4148,7 +4227,8 @@ static bool pte_range_none(pte_t *pte, int nr_pages) > return true; > } > > -static struct folio *alloc_anon_folio(struct vm_fault *vmf) > +static struct folio *alloc_anon_folio(struct vm_fault *vmf, > + bool (*pte_range_check)(pte_t *, int)) > { > #ifdef CONFIG_TRANSPARENT_HUGEPAGE > struct vm_area_struct *vma = vmf->vma; > @@ -4190,7 +4270,7 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf) About this patch context we have the following comments in the source code. /* * Find the highest order where the aligned range is completely * pte_none(). Note that all remaining orders will be completely * pte_none(). */ > order = highest_order(orders); > while (orders) { > addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order); > - if (pte_range_none(pte + pte_index(addr), 1 << order)) > + if (pte_range_check(pte + pte_index(addr), 1 << order)) Again, I don't think we need to pass in the pte_range_check() as call back functions. There are only two call sites, all within this file. This will totally invalide the above comments about pte_none(). In the worst case, just make it accept one argument: it is checking swap range or none range or not. Depending on the argument, do check none or swap range. We should make it blend in with alloc_anon_folio better. My gut feeling is that there should be a better way to make the range check blend in with alloc_anon_folio better. e.g. Maybe store some of the large swap context in the vmf and pass to different places etc. I need to spend more time thinking about it to come up with happier solutions. 
Chris > break; > order = next_order(&orders, order); > } > @@ -4269,7 +4349,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf) > if (unlikely(anon_vma_prepare(vma))) > goto oom; > /* Returns NULL on OOM or ERR_PTR(-EAGAIN) if we must retry the fault */ > - folio = alloc_anon_folio(vmf); > + folio = alloc_anon_folio(vmf, pte_range_none); > if (IS_ERR(folio)) > return 0; > if (!folio) > -- > 2.34.1 > >
On Thu, Jan 18, 2024 at 3:12 AM Barry Song <21cnbao@gmail.com> wrote: > > From: Chuanhua Han <hanchuanhua@oppo.com> > > On an embedded system like Android, more than half of anon memory is actually > in swap devices such as zRAM. For example, while an app is switched to back- > ground, its most memory might be swapped-out. > > Now we have mTHP features, unfortunately, if we don't support large folios > swap-in, once those large folios are swapped-out, we immediately lose the > performance gain we can get through large folios and hardware optimization > such as CONT-PTE. > > This patch brings up mTHP swap-in support. Right now, we limit mTHP swap-in > to those contiguous swaps which were likely swapped out from mTHP as a whole. > > On the other hand, the current implementation only covers the SWAP_SYCHRONOUS > case. It doesn't support swapin_readahead as large folios yet. > > Right now, we are re-faulting large folios which are still in swapcache as a > whole, this can effectively decrease extra loops and early-exitings which we > have increased in arch_swap_restore() while supporting MTE restore for folios > rather than page. > > Signed-off-by: Chuanhua Han <hanchuanhua@oppo.com> > Co-developed-by: Barry Song <v-songbaohua@oppo.com> > Signed-off-by: Barry Song <v-songbaohua@oppo.com> > --- > mm/memory.c | 108 +++++++++++++++++++++++++++++++++++++++++++++------- > 1 file changed, 94 insertions(+), 14 deletions(-) > > diff --git a/mm/memory.c b/mm/memory.c > index f61a48929ba7..928b3f542932 100644 > --- a/mm/memory.c > +++ b/mm/memory.c > @@ -107,6 +107,8 @@ EXPORT_SYMBOL(mem_map); > static vm_fault_t do_fault(struct vm_fault *vmf); > static vm_fault_t do_anonymous_page(struct vm_fault *vmf); > static bool vmf_pte_changed(struct vm_fault *vmf); > +static struct folio *alloc_anon_folio(struct vm_fault *vmf, > + bool (*pte_range_check)(pte_t *, int)); > > /* > * Return true if the original pte was a uffd-wp pte marker (so the pte was > @@ -3784,6 +3786,34 @@ static vm_fault_t handle_pte_marker(struct vm_fault *vmf) > return VM_FAULT_SIGBUS; > } > > +static bool pte_range_swap(pte_t *pte, int nr_pages) > +{ > + int i; > + swp_entry_t entry; > + unsigned type; > + pgoff_t start_offset; > + > + entry = pte_to_swp_entry(ptep_get_lockless(pte)); > + if (non_swap_entry(entry)) > + return false; > + start_offset = swp_offset(entry); > + if (start_offset % nr_pages) > + return false; > + > + type = swp_type(entry); > + for (i = 1; i < nr_pages; i++) { > + entry = pte_to_swp_entry(ptep_get_lockless(pte + i)); > + if (non_swap_entry(entry)) > + return false; > + if (swp_offset(entry) != start_offset + i) > + return false; > + if (swp_type(entry) != type) > + return false; > + } > + > + return true; > +} > + > /* > * We enter with non-exclusive mmap_lock (to exclude vma changes, > * but allow concurrent faults), and pte mapped but not yet locked. 
> @@ -3804,6 +3834,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) > pte_t pte; > vm_fault_t ret = 0; > void *shadow = NULL; > + int nr_pages = 1; > + unsigned long start_address; > + pte_t *start_pte; > > if (!pte_unmap_same(vmf)) > goto out; > @@ -3868,13 +3901,20 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) > if (data_race(si->flags & SWP_SYNCHRONOUS_IO) && > __swap_count(entry) == 1) { > /* skip swapcache */ > - folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, > - vma, vmf->address, false); > + folio = alloc_anon_folio(vmf, pte_range_swap); > page = &folio->page; > if (folio) { > __folio_set_locked(folio); > __folio_set_swapbacked(folio); > > + if (folio_test_large(folio)) { > + unsigned long start_offset; > + > + nr_pages = folio_nr_pages(folio); > + start_offset = swp_offset(entry) & ~(nr_pages - 1); > + entry = swp_entry(swp_type(entry), start_offset); > + } > + > if (mem_cgroup_swapin_charge_folio(folio, > vma->vm_mm, GFP_KERNEL, > entry)) { > @@ -3980,6 +4020,39 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) > */ > vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address, > &vmf->ptl); > + > + start_address = vmf->address; > + start_pte = vmf->pte; > + if (folio_test_large(folio)) { > + unsigned long nr = folio_nr_pages(folio); > + unsigned long addr = ALIGN_DOWN(vmf->address, nr * PAGE_SIZE); > + pte_t *pte_t = vmf->pte - (vmf->address - addr) / PAGE_SIZE; I forgot about one comment here. Please change the variable name other than "pte_t", it is a bit strange to use the typedef name as variable name here. Chris > + > + /* > + * case 1: we are allocating large_folio, try to map it as a whole > + * iff the swap entries are still entirely mapped; > + * case 2: we hit a large folio in swapcache, and all swap entries > + * are still entirely mapped, try to map a large folio as a whole. > + * otherwise, map only the faulting page within the large folio > + * which is swapcache > + */ > + if (pte_range_swap(pte_t, nr)) { > + start_address = addr; > + start_pte = pte_t; > + if (unlikely(folio == swapcache)) { > + /* > + * the below has been done before swap_read_folio() > + * for case 1 > + */ > + nr_pages = nr; > + entry = pte_to_swp_entry(ptep_get(start_pte)); > + page = &folio->page; > + } > + } else if (nr_pages > 1) { /* ptes have changed for case 1 */ > + goto out_nomap; > + } > + } > + > if (unlikely(!vmf->pte || !pte_same(ptep_get(vmf->pte), vmf->orig_pte))) > goto out_nomap; > > @@ -4047,12 +4120,14 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) > * We're already holding a reference on the page but haven't mapped it > * yet. > */ > - swap_free(entry); > + swap_nr_free(entry, nr_pages); > if (should_try_to_free_swap(folio, vma, vmf->flags)) > folio_free_swap(folio); > > - inc_mm_counter(vma->vm_mm, MM_ANONPAGES); > - dec_mm_counter(vma->vm_mm, MM_SWAPENTS); > + folio_ref_add(folio, nr_pages - 1); > + add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr_pages); > + add_mm_counter(vma->vm_mm, MM_SWAPENTS, -nr_pages); > + > pte = mk_pte(page, vma->vm_page_prot); > > /* > @@ -4062,14 +4137,14 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) > * exclusivity. 
> */ > if (!folio_test_ksm(folio) && > - (exclusive || folio_ref_count(folio) == 1)) { > + (exclusive || folio_ref_count(folio) == nr_pages)) { > if (vmf->flags & FAULT_FLAG_WRITE) { > pte = maybe_mkwrite(pte_mkdirty(pte), vma); > vmf->flags &= ~FAULT_FLAG_WRITE; > } > rmap_flags |= RMAP_EXCLUSIVE; > } > - flush_icache_page(vma, page); > + flush_icache_pages(vma, page, nr_pages); > if (pte_swp_soft_dirty(vmf->orig_pte)) > pte = pte_mksoft_dirty(pte); > if (pte_swp_uffd_wp(vmf->orig_pte)) > @@ -4081,14 +4156,15 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) > folio_add_new_anon_rmap(folio, vma, vmf->address); > folio_add_lru_vma(folio, vma); > } else { > - folio_add_anon_rmap_pte(folio, page, vma, vmf->address, > + folio_add_anon_rmap_ptes(folio, page, nr_pages, vma, start_address, > rmap_flags); > } > > VM_BUG_ON(!folio_test_anon(folio) || > (pte_write(pte) && !PageAnonExclusive(page))); > - set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte); > - arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte); > + set_ptes(vma->vm_mm, start_address, start_pte, pte, nr_pages); > + > + arch_do_swap_page(vma->vm_mm, vma, start_address, pte, vmf->orig_pte); > > folio_unlock(folio); > if (folio != swapcache && swapcache) { > @@ -4105,6 +4181,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) > } > > if (vmf->flags & FAULT_FLAG_WRITE) { > + if (folio_test_large(folio) && nr_pages > 1) > + vmf->orig_pte = ptep_get(vmf->pte); > + > ret |= do_wp_page(vmf); > if (ret & VM_FAULT_ERROR) > ret &= VM_FAULT_ERROR; > @@ -4112,7 +4191,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) > } > > /* No need to invalidate - it was non-present before */ > - update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1); > + update_mmu_cache_range(vmf, vma, start_address, start_pte, nr_pages); > unlock: > if (vmf->pte) > pte_unmap_unlock(vmf->pte, vmf->ptl); > @@ -4148,7 +4227,8 @@ static bool pte_range_none(pte_t *pte, int nr_pages) > return true; > } > > -static struct folio *alloc_anon_folio(struct vm_fault *vmf) > +static struct folio *alloc_anon_folio(struct vm_fault *vmf, > + bool (*pte_range_check)(pte_t *, int)) > { > #ifdef CONFIG_TRANSPARENT_HUGEPAGE > struct vm_area_struct *vma = vmf->vma; > @@ -4190,7 +4270,7 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf) > order = highest_order(orders); > while (orders) { > addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order); > - if (pte_range_none(pte + pte_index(addr), 1 << order)) > + if (pte_range_check(pte + pte_index(addr), 1 << order)) > break; > order = next_order(&orders, order); > } > @@ -4269,7 +4349,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf) > if (unlikely(anon_vma_prepare(vma))) > goto oom; > /* Returns NULL on OOM or ERR_PTR(-EAGAIN) if we must retry the fault */ > - folio = alloc_anon_folio(vmf); > + folio = alloc_anon_folio(vmf, pte_range_none); > if (IS_ERR(folio)) > return 0; > if (!folio) > -- > 2.34.1 > >
On Sun, Jan 28, 2024 at 8:53 AM Chris Li <chrisl@kernel.org> wrote: > > On Thu, Jan 18, 2024 at 3:12 AM Barry Song <21cnbao@gmail.com> wrote: > > > > From: Chuanhua Han <hanchuanhua@oppo.com> > > > > On an embedded system like Android, more than half of anon memory is actually > > in swap devices such as zRAM. For example, while an app is switched to back- > > ground, its most memory might be swapped-out. > > > > Now we have mTHP features, unfortunately, if we don't support large folios > > swap-in, once those large folios are swapped-out, we immediately lose the > > performance gain we can get through large folios and hardware optimization > > such as CONT-PTE. > > > > This patch brings up mTHP swap-in support. Right now, we limit mTHP swap-in > > to those contiguous swaps which were likely swapped out from mTHP as a whole. > > > > On the other hand, the current implementation only covers the SWAP_SYCHRONOUS > > case. It doesn't support swapin_readahead as large folios yet. > > > > Right now, we are re-faulting large folios which are still in swapcache as a > > whole, this can effectively decrease extra loops and early-exitings which we > > have increased in arch_swap_restore() while supporting MTE restore for folios > > rather than page. > > > > Signed-off-by: Chuanhua Han <hanchuanhua@oppo.com> > > Co-developed-by: Barry Song <v-songbaohua@oppo.com> > > Signed-off-by: Barry Song <v-songbaohua@oppo.com> > > --- > > mm/memory.c | 108 +++++++++++++++++++++++++++++++++++++++++++++------- > > 1 file changed, 94 insertions(+), 14 deletions(-) > > > > diff --git a/mm/memory.c b/mm/memory.c > > index f61a48929ba7..928b3f542932 100644 > > --- a/mm/memory.c > > +++ b/mm/memory.c > > @@ -107,6 +107,8 @@ EXPORT_SYMBOL(mem_map); > > static vm_fault_t do_fault(struct vm_fault *vmf); > > static vm_fault_t do_anonymous_page(struct vm_fault *vmf); > > static bool vmf_pte_changed(struct vm_fault *vmf); > > +static struct folio *alloc_anon_folio(struct vm_fault *vmf, > > + bool (*pte_range_check)(pte_t *, int)); > > Instead of returning "bool", the pte_range_check() can return the > start of the swap entry of the large folio. > That will save some of the later code needed to get the start of the > large folio. I am trying to reuse alloc_anon_folio() for both do_anon_page and do_swap_page. Unfortunately, this func returns a folio, no more place to return a swap entry unless we add a parameter. Getting start swap is quite cheap on the other hand. > > > > > /* > > * Return true if the original pte was a uffd-wp pte marker (so the pte was > > @@ -3784,6 +3786,34 @@ static vm_fault_t handle_pte_marker(struct vm_fault *vmf) > > return VM_FAULT_SIGBUS; > > } > > > > +static bool pte_range_swap(pte_t *pte, int nr_pages) > > This function name seems to suggest it will perform the range swap. > That is not what it is doing. > Suggest change to some other name reflecting that it is only a > condition test without actual swap action. > I am not very good at naming functions. Just think it out loud: e.g. > pte_range_swap_check, pte_test_range_swap. You can come up with > something better. Ryan has a function named pte_range_none, which is checking the whole range is pte_none. Maybe we can have an is_pte_range_contig_swap which includes both swap and contiguous as we only need contiguous swap entries. 
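For example, the check could keep its current body and only take the more descriptive name (a sketch of the signature; nothing else changes):

    /* same body as pte_range_swap() in this patch, renamed for clarity */
    static bool is_pte_range_contig_swap(pte_t *pte, int nr_pages);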
> > > > +{ > > + int i; > > + swp_entry_t entry; > > + unsigned type; > > + pgoff_t start_offset; > > + > > + entry = pte_to_swp_entry(ptep_get_lockless(pte)); > > + if (non_swap_entry(entry)) > > + return false; > > + start_offset = swp_offset(entry); > > + if (start_offset % nr_pages) > > + return false; > > This suggests the pte argument needs to point to the beginning of the > large folio equivalent of swap entry(not sure what to call it. Let me > call it "large folio swap" here.). > We might want to unify the terms for that. > Any way, might want to document this requirement, otherwise the caller > might consider passing the current pte that generates the fault. From > the function name it is not obvious which pte should pass it. ok, Ryan's swap-out will allocate swap entries whose start offset is aligned with nr_pages. will add some doc to describe the first entry. > > > + > > + type = swp_type(entry); > > + for (i = 1; i < nr_pages; i++) { > > You might want to test the last page backwards, because if the entry > is not the large folio swap, most likely it will have the last entry > invalid. Some of the beginning swap entries might match due to batch > allocation etc. The SSD likes to group the nearby swap entry write out > together on the disk. I am not sure I got your point. This is checking all pages within the range of a large folio, Ryan's patch allocates swap entries all together as a whole for a large folio while swapping out. @@ -1073,14 +1133,13 @@ int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_size) spin_unlock(&si->lock); goto nextsi; } - if (size == SWAPFILE_CLUSTER) { - if (si->flags & SWP_BLKDEV) - n_ret = swap_alloc_cluster(si, swp_entries); + if (size > 1) { + n_ret = swap_alloc_large(si, swp_entries, size); } else n_ret = scan_swap_map_slots(si, SWAP_HAS_CACHE, n_goal, swp_entries); > > > > > + entry = pte_to_swp_entry(ptep_get_lockless(pte + i)); > > > + if (non_swap_entry(entry)) > > + return false; > > + if (swp_offset(entry) != start_offset + i) > > + return false; > > + if (swp_type(entry) != type) > > + return false; > > + } > > + > > + return true; > > +} > > + > > /* > > * We enter with non-exclusive mmap_lock (to exclude vma changes, > > * but allow concurrent faults), and pte mapped but not yet locked. > > @@ -3804,6 +3834,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) > > pte_t pte; > > vm_fault_t ret = 0; > > void *shadow = NULL; > > + int nr_pages = 1; > > + unsigned long start_address; > > + pte_t *start_pte; > > > > if (!pte_unmap_same(vmf)) > > goto out; > > @@ -3868,13 +3901,20 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) > > if (data_race(si->flags & SWP_SYNCHRONOUS_IO) && > > __swap_count(entry) == 1) { > > /* skip swapcache */ > > - folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, > > - vma, vmf->address, false); > > + folio = alloc_anon_folio(vmf, pte_range_swap); > > This function can call pte_range_swap() twice(), one here, another one > in folio_test_large(). > Consider caching the result so it does not need to walk the pte range > swap twice. > > I think alloc_anon_folio should either be told what is the > size(prefered) or just figure out the right size. I don't think it > needs to pass in the checking function as function callbacks. There > are two call sites of alloc_anon_folio, they are all within this > function. The call back seems a bit overkill here. Also duplicate the > range swap walk. alloc_anon_folio is reusing the one for do_anon_page. 
in both cases, scanning PTEs to figure out the proper size is done. The other call site is within do_anonymous_page(). > > > page = &folio->page; > > if (folio) { > > __folio_set_locked(folio); > > __folio_set_swapbacked(folio); > > > > + if (folio_test_large(folio)) { > > + unsigned long start_offset; > > + > > + nr_pages = folio_nr_pages(folio); > > + start_offset = swp_offset(entry) & ~(nr_pages - 1); > Here is the first place place we roll up the start offset with folio size > > > + entry = swp_entry(swp_type(entry), start_offset); > > + } > > + > > if (mem_cgroup_swapin_charge_folio(folio, > > vma->vm_mm, GFP_KERNEL, > > entry)) { > > @@ -3980,6 +4020,39 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) > > */ > > vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address, > > &vmf->ptl); > > + > > + start_address = vmf->address; > > + start_pte = vmf->pte; > > + if (folio_test_large(folio)) { > > + unsigned long nr = folio_nr_pages(folio); > > + unsigned long addr = ALIGN_DOWN(vmf->address, nr * PAGE_SIZE); > > + pte_t *pte_t = vmf->pte - (vmf->address - addr) / PAGE_SIZE; > > Here is the second place we roll up the folio size. > Maybe we can cache results and avoid repetition? We have two paths getting into large folios 1. we allocate a new large folio 2. we find a large folio in swapcache We have rolled up the folio size for case 1 before, but here we need to take care of case 2 as well. so that is why we need both. let me think if we can have some way to remove some redundant code for case 1. > > > + > > + /* > > + * case 1: we are allocating large_folio, try to map it as a whole > > + * iff the swap entries are still entirely mapped; > > + * case 2: we hit a large folio in swapcache, and all swap entries > > + * are still entirely mapped, try to map a large folio as a whole. > > + * otherwise, map only the faulting page within the large folio > > + * which is swapcache > > + */ > > One question I have in mind is that the swap device is locked. We > can't change the swap slot allocations. > It does not stop the pte entry getting changed right? Then we can have > someone in the user pace racing to change the PTE vs we checking the > pte there. > > > + if (pte_range_swap(pte_t, nr)) { > > After this pte_range_swap() check, some of the PTE entries get changed > and now we don't have the full large page swap any more? > At least I can't conclude this possibility can't happen yet, please > enlighten me. This check is under PTL. no one else can change it as they have to hold PTL to change pte. vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address, &vmf->ptl); > > > + start_address = addr; > > + start_pte = pte_t; > > + if (unlikely(folio == swapcache)) { > > + /* > > + * the below has been done before swap_read_folio() > > + * for case 1 > > + */ > > + nr_pages = nr; > > + entry = pte_to_swp_entry(ptep_get(start_pte)); > > If we make pte_range_swap() return the entry, we can avoid refetching > the swap entry here. we will have to add a parameter swp_entry_t *first_entry to return the entry. The difficulty is we will have to add this parameter in alloc_anon_folio() as well, that's a bit overkill for that function. > > > + page = &folio->page; > > + } > > + } else if (nr_pages > 1) { /* ptes have changed for case 1 */ > > + goto out_nomap; > > + } > > + } > > + > I rewrote the above to make the code indentation matching the execution flow. > I did not add any functional change. Just rearrange the code to be a > bit more streamlined. Get rid of the "else if goto". 
> if (!pte_range_swap(pte_t, nr)) { > if (nr_pages > 1) /* ptes have changed for case 1 */ > goto out_nomap; > goto check_pte; > } > > start_address = addr; > start_pte = pte_t; > if (unlikely(folio == swapcache)) { > /* > * the below has been done before swap_read_folio() > * for case 1 > */ > nr_pages = nr; > entry = pte_to_swp_entry(ptep_get(start_pte)); > page = &folio->page; > } > } looks good to me. > > check_pte: > > > if (unlikely(!vmf->pte || !pte_same(ptep_get(vmf->pte), vmf->orig_pte))) > > goto out_nomap; > > > > @@ -4047,12 +4120,14 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) > > * We're already holding a reference on the page but haven't mapped it > > * yet. > > */ > > - swap_free(entry); > > + swap_nr_free(entry, nr_pages); > > if (should_try_to_free_swap(folio, vma, vmf->flags)) > > folio_free_swap(folio); > > > > - inc_mm_counter(vma->vm_mm, MM_ANONPAGES); > > - dec_mm_counter(vma->vm_mm, MM_SWAPENTS); > > + folio_ref_add(folio, nr_pages - 1); > > + add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr_pages); > > + add_mm_counter(vma->vm_mm, MM_SWAPENTS, -nr_pages); > > + > > pte = mk_pte(page, vma->vm_page_prot); > > > > /* > > @@ -4062,14 +4137,14 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) > > * exclusivity. > > */ > > if (!folio_test_ksm(folio) && > > - (exclusive || folio_ref_count(folio) == 1)) { > > + (exclusive || folio_ref_count(folio) == nr_pages)) { > > if (vmf->flags & FAULT_FLAG_WRITE) { > > pte = maybe_mkwrite(pte_mkdirty(pte), vma); > > vmf->flags &= ~FAULT_FLAG_WRITE; > > } > > rmap_flags |= RMAP_EXCLUSIVE; > > } > > - flush_icache_page(vma, page); > > + flush_icache_pages(vma, page, nr_pages); > > if (pte_swp_soft_dirty(vmf->orig_pte)) > > pte = pte_mksoft_dirty(pte); > > if (pte_swp_uffd_wp(vmf->orig_pte)) > > @@ -4081,14 +4156,15 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) > > folio_add_new_anon_rmap(folio, vma, vmf->address); > > folio_add_lru_vma(folio, vma); > > } else { > > - folio_add_anon_rmap_pte(folio, page, vma, vmf->address, > > + folio_add_anon_rmap_ptes(folio, page, nr_pages, vma, start_address, > > rmap_flags); > > } > > > > VM_BUG_ON(!folio_test_anon(folio) || > > (pte_write(pte) && !PageAnonExclusive(page))); > > - set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte); > > - arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte); > > + set_ptes(vma->vm_mm, start_address, start_pte, pte, nr_pages); > > + > > + arch_do_swap_page(vma->vm_mm, vma, start_address, pte, vmf->orig_pte); > > > > folio_unlock(folio); > > if (folio != swapcache && swapcache) { > > @@ -4105,6 +4181,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) > > } > > > > if (vmf->flags & FAULT_FLAG_WRITE) { > > + if (folio_test_large(folio) && nr_pages > 1) > > + vmf->orig_pte = ptep_get(vmf->pte); > > + > > ret |= do_wp_page(vmf); > > if (ret & VM_FAULT_ERROR) > > ret &= VM_FAULT_ERROR; > > @@ -4112,7 +4191,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) > > } > > > > /* No need to invalidate - it was non-present before */ > > - update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1); > > + update_mmu_cache_range(vmf, vma, start_address, start_pte, nr_pages); > > unlock: > > if (vmf->pte) > > pte_unmap_unlock(vmf->pte, vmf->ptl); > > @@ -4148,7 +4227,8 @@ static bool pte_range_none(pte_t *pte, int nr_pages) > > return true; > > } > > > > -static struct folio *alloc_anon_folio(struct vm_fault *vmf) > > +static struct folio *alloc_anon_folio(struct vm_fault *vmf, > > + bool (*pte_range_check)(pte_t *, int)) > > { > > #ifdef CONFIG_TRANSPARENT_HUGEPAGE > 
> struct vm_area_struct *vma = vmf->vma; > > @@ -4190,7 +4270,7 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf) > > About this patch context we have the following comments in the source code. > /* > * Find the highest order where the aligned range is completely > * pte_none(). Note that all remaining orders will be completely > * pte_none(). > */ > > order = highest_order(orders); > > while (orders) { > > addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order); > > - if (pte_range_none(pte + pte_index(addr), 1 << order)) > > + if (pte_range_check(pte + pte_index(addr), 1 << order)) > > Again, I don't think we need to pass in the pte_range_check() as call > back functions. > There are only two call sites, all within this file. This will totally > invalide the above comments about pte_none(). In the worst case, just > make it accept one argument: it is checking swap range or none range > or not. Depending on the argument, do check none or swap range. > We should make it blend in with alloc_anon_folio better. My gut > feeling is that there should be a better way to make the range check > blend in with alloc_anon_folio better. e.g. Maybe store some of the > large swap context in the vmf and pass to different places etc. I need > to spend more time thinking about it to come up with happier > solutions. could pass a type to hint pte_range_none or pte_range_swap. i'd like to avoid changing any global variable like vmf, as people will have to cross two or more functions to understand what is going on though the second function might be able to use the changed vmf value in the first function. but it really makes the code have more couples. > > Chris > > > break; > > order = next_order(&orders, order); > > } > > @@ -4269,7 +4349,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf) > > if (unlikely(anon_vma_prepare(vma))) > > goto oom; > > /* Returns NULL on OOM or ERR_PTR(-EAGAIN) if we must retry the fault */ > > - folio = alloc_anon_folio(vmf); > > + folio = alloc_anon_folio(vmf, pte_range_none); > > if (IS_ERR(folio)) > > return 0; > > if (!folio) > > -- > > 2.34.1 > > Thanks Barry
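One possible shape for that type hint, sticking to the two existing checks (purely a sketch of the idea, not code from any posted version):

    /* Hypothetical: tell alloc_anon_folio() which range check to apply. */
    enum anon_folio_check {
            ANON_CHECK_NONE,   /* require pte_range_none(), do_anonymous_page() path */
            ANON_CHECK_SWAP,   /* require contiguous swap entries, do_swap_page() path */
    };

    static struct folio *alloc_anon_folio(struct vm_fault *vmf,
                                          enum anon_folio_check check)
    {
            ...
            addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
            if (check == ANON_CHECK_SWAP ?
                pte_range_swap(pte + pte_index(addr), 1 << order) :
                pte_range_none(pte + pte_index(addr), 1 << order))
                    break;
            ...
    }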
On Sun, Jan 28, 2024 at 9:06 AM Chris Li <chrisl@kernel.org> wrote: > > On Thu, Jan 18, 2024 at 3:12 AM Barry Song <21cnbao@gmail.com> wrote: > > > > From: Chuanhua Han <hanchuanhua@oppo.com> > > > > On an embedded system like Android, more than half of anon memory is actually > > in swap devices such as zRAM. For example, while an app is switched to back- > > ground, its most memory might be swapped-out. > > > > Now we have mTHP features, unfortunately, if we don't support large folios > > swap-in, once those large folios are swapped-out, we immediately lose the > > performance gain we can get through large folios and hardware optimization > > such as CONT-PTE. > > > > This patch brings up mTHP swap-in support. Right now, we limit mTHP swap-in > > to those contiguous swaps which were likely swapped out from mTHP as a whole. > > > > On the other hand, the current implementation only covers the SWAP_SYCHRONOUS > > case. It doesn't support swapin_readahead as large folios yet. > > > > Right now, we are re-faulting large folios which are still in swapcache as a > > whole, this can effectively decrease extra loops and early-exitings which we > > have increased in arch_swap_restore() while supporting MTE restore for folios > > rather than page. > > > > Signed-off-by: Chuanhua Han <hanchuanhua@oppo.com> > > Co-developed-by: Barry Song <v-songbaohua@oppo.com> > > Signed-off-by: Barry Song <v-songbaohua@oppo.com> > > --- > > mm/memory.c | 108 +++++++++++++++++++++++++++++++++++++++++++++------- > > 1 file changed, 94 insertions(+), 14 deletions(-) > > > > diff --git a/mm/memory.c b/mm/memory.c > > index f61a48929ba7..928b3f542932 100644 > > --- a/mm/memory.c > > +++ b/mm/memory.c > > @@ -107,6 +107,8 @@ EXPORT_SYMBOL(mem_map); > > static vm_fault_t do_fault(struct vm_fault *vmf); > > static vm_fault_t do_anonymous_page(struct vm_fault *vmf); > > static bool vmf_pte_changed(struct vm_fault *vmf); > > +static struct folio *alloc_anon_folio(struct vm_fault *vmf, > > + bool (*pte_range_check)(pte_t *, int)); > > > > /* > > * Return true if the original pte was a uffd-wp pte marker (so the pte was > > @@ -3784,6 +3786,34 @@ static vm_fault_t handle_pte_marker(struct vm_fault *vmf) > > return VM_FAULT_SIGBUS; > > } > > > > +static bool pte_range_swap(pte_t *pte, int nr_pages) > > +{ > > + int i; > > + swp_entry_t entry; > > + unsigned type; > > + pgoff_t start_offset; > > + > > + entry = pte_to_swp_entry(ptep_get_lockless(pte)); > > + if (non_swap_entry(entry)) > > + return false; > > + start_offset = swp_offset(entry); > > + if (start_offset % nr_pages) > > + return false; > > + > > + type = swp_type(entry); > > + for (i = 1; i < nr_pages; i++) { > > + entry = pte_to_swp_entry(ptep_get_lockless(pte + i)); > > + if (non_swap_entry(entry)) > > + return false; > > + if (swp_offset(entry) != start_offset + i) > > + return false; > > + if (swp_type(entry) != type) > > + return false; > > + } > > + > > + return true; > > +} > > + > > /* > > * We enter with non-exclusive mmap_lock (to exclude vma changes, > > * but allow concurrent faults), and pte mapped but not yet locked. 
> > @@ -3804,6 +3834,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) > > pte_t pte; > > vm_fault_t ret = 0; > > void *shadow = NULL; > > + int nr_pages = 1; > > + unsigned long start_address; > > + pte_t *start_pte; > > > > if (!pte_unmap_same(vmf)) > > goto out; > > @@ -3868,13 +3901,20 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) > > if (data_race(si->flags & SWP_SYNCHRONOUS_IO) && > > __swap_count(entry) == 1) { > > /* skip swapcache */ > > - folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, > > - vma, vmf->address, false); > > + folio = alloc_anon_folio(vmf, pte_range_swap); > > page = &folio->page; > > if (folio) { > > __folio_set_locked(folio); > > __folio_set_swapbacked(folio); > > > > + if (folio_test_large(folio)) { > > + unsigned long start_offset; > > + > > + nr_pages = folio_nr_pages(folio); > > + start_offset = swp_offset(entry) & ~(nr_pages - 1); > > + entry = swp_entry(swp_type(entry), start_offset); > > + } > > + > > if (mem_cgroup_swapin_charge_folio(folio, > > vma->vm_mm, GFP_KERNEL, > > entry)) { > > @@ -3980,6 +4020,39 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) > > */ > > vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address, > > &vmf->ptl); > > + > > + start_address = vmf->address; > > + start_pte = vmf->pte; > > + if (folio_test_large(folio)) { > > + unsigned long nr = folio_nr_pages(folio); > > + unsigned long addr = ALIGN_DOWN(vmf->address, nr * PAGE_SIZE); > > + pte_t *pte_t = vmf->pte - (vmf->address - addr) / PAGE_SIZE; > > I forgot about one comment here. > Please change the variable name other than "pte_t", it is a bit > strange to use the typedef name as variable name here. > make sense! > Chris Thanks Barry
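For example, the line in question could simply become (the new name is only a suggestion):

    pte_t *folio_pte = vmf->pte - (vmf->address - addr) / PAGE_SIZE;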
diff --git a/mm/memory.c b/mm/memory.c
index f61a48929ba7..928b3f542932 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -107,6 +107,8 @@ EXPORT_SYMBOL(mem_map);
 static vm_fault_t do_fault(struct vm_fault *vmf);
 static vm_fault_t do_anonymous_page(struct vm_fault *vmf);
 static bool vmf_pte_changed(struct vm_fault *vmf);
+static struct folio *alloc_anon_folio(struct vm_fault *vmf,
+                                      bool (*pte_range_check)(pte_t *, int));
 
 /*
  * Return true if the original pte was a uffd-wp pte marker (so the pte was
@@ -3784,6 +3786,34 @@ static vm_fault_t handle_pte_marker(struct vm_fault *vmf)
         return VM_FAULT_SIGBUS;
 }
 
+static bool pte_range_swap(pte_t *pte, int nr_pages)
+{
+        int i;
+        swp_entry_t entry;
+        unsigned type;
+        pgoff_t start_offset;
+
+        entry = pte_to_swp_entry(ptep_get_lockless(pte));
+        if (non_swap_entry(entry))
+                return false;
+        start_offset = swp_offset(entry);
+        if (start_offset % nr_pages)
+                return false;
+
+        type = swp_type(entry);
+        for (i = 1; i < nr_pages; i++) {
+                entry = pte_to_swp_entry(ptep_get_lockless(pte + i));
+                if (non_swap_entry(entry))
+                        return false;
+                if (swp_offset(entry) != start_offset + i)
+                        return false;
+                if (swp_type(entry) != type)
+                        return false;
+        }
+
+        return true;
+}
+
 /*
  * We enter with non-exclusive mmap_lock (to exclude vma changes,
  * but allow concurrent faults), and pte mapped but not yet locked.
@@ -3804,6 +3834,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
         pte_t pte;
         vm_fault_t ret = 0;
         void *shadow = NULL;
+        int nr_pages = 1;
+        unsigned long start_address;
+        pte_t *start_pte;
 
         if (!pte_unmap_same(vmf))
                 goto out;
@@ -3868,13 +3901,20 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
         if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
             __swap_count(entry) == 1) {
                 /* skip swapcache */
-                folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
-                                        vma, vmf->address, false);
+                folio = alloc_anon_folio(vmf, pte_range_swap);
                 page = &folio->page;
                 if (folio) {
                         __folio_set_locked(folio);
                         __folio_set_swapbacked(folio);
 
+                        if (folio_test_large(folio)) {
+                                unsigned long start_offset;
+
+                                nr_pages = folio_nr_pages(folio);
+                                start_offset = swp_offset(entry) & ~(nr_pages - 1);
+                                entry = swp_entry(swp_type(entry), start_offset);
+                        }
+
                         if (mem_cgroup_swapin_charge_folio(folio,
                                                 vma->vm_mm, GFP_KERNEL,
                                                 entry)) {
@@ -3980,6 +4020,39 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
          */
         vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
                         &vmf->ptl);
+
+        start_address = vmf->address;
+        start_pte = vmf->pte;
+        if (folio_test_large(folio)) {
+                unsigned long nr = folio_nr_pages(folio);
+                unsigned long addr = ALIGN_DOWN(vmf->address, nr * PAGE_SIZE);
+                pte_t *pte_t = vmf->pte - (vmf->address - addr) / PAGE_SIZE;
+
+                /*
+                 * case 1: we are allocating large_folio, try to map it as a whole
+                 * iff the swap entries are still entirely mapped;
+                 * case 2: we hit a large folio in swapcache, and all swap entries
+                 * are still entirely mapped, try to map a large folio as a whole.
+                 * otherwise, map only the faulting page within the large folio
+                 * which is swapcache
+                 */
+                if (pte_range_swap(pte_t, nr)) {
+                        start_address = addr;
+                        start_pte = pte_t;
+                        if (unlikely(folio == swapcache)) {
+                                /*
+                                 * the below has been done before swap_read_folio()
+                                 * for case 1
+                                 */
+                                nr_pages = nr;
+                                entry = pte_to_swp_entry(ptep_get(start_pte));
+                                page = &folio->page;
+                        }
+                } else if (nr_pages > 1) { /* ptes have changed for case 1 */
+                        goto out_nomap;
+                }
+        }
+
         if (unlikely(!vmf->pte || !pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
                 goto out_nomap;
 
@@ -4047,12 +4120,14 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
          * We're already holding a reference on the page but haven't mapped it
          * yet.
          */
-        swap_free(entry);
+        swap_nr_free(entry, nr_pages);
         if (should_try_to_free_swap(folio, vma, vmf->flags))
                 folio_free_swap(folio);
 
-        inc_mm_counter(vma->vm_mm, MM_ANONPAGES);
-        dec_mm_counter(vma->vm_mm, MM_SWAPENTS);
+        folio_ref_add(folio, nr_pages - 1);
+        add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr_pages);
+        add_mm_counter(vma->vm_mm, MM_SWAPENTS, -nr_pages);
+
         pte = mk_pte(page, vma->vm_page_prot);
 
         /*
@@ -4062,14 +4137,14 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
          * exclusivity.
          */
         if (!folio_test_ksm(folio) &&
-            (exclusive || folio_ref_count(folio) == 1)) {
+            (exclusive || folio_ref_count(folio) == nr_pages)) {
                 if (vmf->flags & FAULT_FLAG_WRITE) {
                         pte = maybe_mkwrite(pte_mkdirty(pte), vma);
                         vmf->flags &= ~FAULT_FLAG_WRITE;
                 }
                 rmap_flags |= RMAP_EXCLUSIVE;
         }
-        flush_icache_page(vma, page);
+        flush_icache_pages(vma, page, nr_pages);
         if (pte_swp_soft_dirty(vmf->orig_pte))
                 pte = pte_mksoft_dirty(pte);
         if (pte_swp_uffd_wp(vmf->orig_pte))
@@ -4081,14 +4156,15 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
                 folio_add_new_anon_rmap(folio, vma, vmf->address);
                 folio_add_lru_vma(folio, vma);
         } else {
-                folio_add_anon_rmap_pte(folio, page, vma, vmf->address,
+                folio_add_anon_rmap_ptes(folio, page, nr_pages, vma, start_address,
                                         rmap_flags);
         }
 
         VM_BUG_ON(!folio_test_anon(folio) ||
                         (pte_write(pte) && !PageAnonExclusive(page)));
-        set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
-        arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);
+        set_ptes(vma->vm_mm, start_address, start_pte, pte, nr_pages);
+
+        arch_do_swap_page(vma->vm_mm, vma, start_address, pte, vmf->orig_pte);
 
         folio_unlock(folio);
         if (folio != swapcache && swapcache) {
@@ -4105,6 +4181,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
         }
 
         if (vmf->flags & FAULT_FLAG_WRITE) {
+                if (folio_test_large(folio) && nr_pages > 1)
+                        vmf->orig_pte = ptep_get(vmf->pte);
+
                 ret |= do_wp_page(vmf);
                 if (ret & VM_FAULT_ERROR)
                         ret &= VM_FAULT_ERROR;
@@ -4112,7 +4191,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
         }
 
         /* No need to invalidate - it was non-present before */
-        update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
+        update_mmu_cache_range(vmf, vma, start_address, start_pte, nr_pages);
 unlock:
         if (vmf->pte)
                 pte_unmap_unlock(vmf->pte, vmf->ptl);
@@ -4148,7 +4227,8 @@ static bool pte_range_none(pte_t *pte, int nr_pages)
         return true;
 }
 
-static struct folio *alloc_anon_folio(struct vm_fault *vmf)
+static struct folio *alloc_anon_folio(struct vm_fault *vmf,
+                                      bool (*pte_range_check)(pte_t *, int))
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
         struct vm_area_struct *vma = vmf->vma;
@@ -4190,7 +4270,7 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
         order = highest_order(orders);
         while (orders) {
                 addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
-                if (pte_range_none(pte + pte_index(addr), 1 << order))
+                if (pte_range_check(pte + pte_index(addr), 1 << order))
                         break;
                 order = next_order(&orders, order);
         }
@@ -4269,7 +4349,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
         if (unlikely(anon_vma_prepare(vma)))
                 goto oom;
         /* Returns NULL on OOM or ERR_PTR(-EAGAIN) if we must retry the fault */
-        folio = alloc_anon_folio(vmf);
+        folio = alloc_anon_folio(vmf, pte_range_none);
         if (IS_ERR(folio))
                 return 0;
         if (!folio)