Message ID | 20240223041550.77157-1-21cnbao@gmail.com |
---|---|
State | New |
Headers |
From: Barry Song <21cnbao@gmail.com>
To: sj@kernel.org, akpm@linux-foundation.org, damon@lists.linux.dev, linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, minchan@kernel.org, mhocko@suse.com, hannes@cmpxchg.org, Barry Song <v-songbaohua@oppo.com>
Subject: [PATCH RFC] mm: madvise: pageout: ignore references rather than clearing young
Date: Fri, 23 Feb 2024 17:15:50 +1300
Message-Id: <20240223041550.77157-1-21cnbao@gmail.com> |
Series |
[RFC] mm: madvise: pageout: ignore references rather than clearing young
|
Commit Message
Barry Song
Feb. 23, 2024, 4:15 a.m. UTC
From: Barry Song <v-songbaohua@oppo.com>

While doing MADV_PAGEOUT, the current code will clear PTE young
so that vmscan won't read young flags to allow the reclamation
of madvised folios to go ahead.
It seems we can do it by directly ignoring references, thus we
can remove tlb flush in madvise and rmap overhead in vmscan.

Regarding the side effect, in the original code, if a parallel
thread runs side by side to access the madvised memory with the
thread doing madvise, folios will get a chance to be re-activated
by vmscan. But with the patch, they will still be reclaimed. But
this behaviour doing PAGEOUT and doing access at the same time is
quite silly like DoS. So probably, we don't need to care.

A microbench as below has shown 6% decrement on the latency of
MADV_PAGEOUT,

#define PGSIZE 4096
main()
{
	int i;
#define SIZE 512*1024*1024
	volatile long *p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
			MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	for (i = 0; i < SIZE/sizeof(long); i += PGSIZE / sizeof(long))
		p[i] = 0x11;

	madvise(p, SIZE, MADV_PAGEOUT);
}

w/o patch                          w/ patch
root@10:~# time ./a.out            root@10:~# time ./a.out
real	0m49.634s                  real	0m46.334s
user	0m0.637s                   user	0m0.648s
sys	0m47.434s                  sys	0m44.265s

Signed-off-by: Barry Song <v-songbaohua@oppo.com>
---
 mm/damon/paddr.c |  2 +-
 mm/internal.h    |  2 +-
 mm/madvise.c     |  8 ++++----
 mm/vmscan.c      | 12 +++++++-----
 4 files changed, 13 insertions(+), 11 deletions(-)
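Editorial note: the microbenchmark above is quoted exactly as posted and omits its headers. For readers who want to reproduce the measurement, a self-contained version might look like the sketch below; the added #includes, the mmap() failure check and the explicit return are editorial additions, while the 512 MiB mapping, the one-store-per-page touch loop and the single MADV_PAGEOUT call follow the commit message. It assumes a kernel and libc that provide MADV_PAGEOUT, and is meant to be run under time(1) as in the table above.

/* bench.c - build with: gcc -O2 -o bench bench.c */
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

#define PGSIZE 4096
#define SIZE   (512UL * 1024 * 1024)

int main(void)
{
	volatile long *p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
				MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	size_t i;

	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Touch one long per page so every page is populated and its PTE is young. */
	for (i = 0; i < SIZE / sizeof(long); i += PGSIZE / sizeof(long))
		p[i] = 0x11;

	/* Ask the kernel to reclaim the whole range; this is the path being timed. */
	if (madvise((void *)p, SIZE, MADV_PAGEOUT))
		perror("madvise");

	return 0;
}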
Comments
Hi Barry, On Fri, Feb 23, 2024 at 05:15:50PM +1300, Barry Song wrote: > From: Barry Song <v-songbaohua@oppo.com> > > While doing MADV_PAGEOUT, the current code will clear PTE young > so that vmscan won't read young flags to allow the reclamation > of madvised folios to go ahead. Isn't it good to accelerate reclaiming? vmscan checks whether the page was accessed recenlty by the young bit from pte and if it is, it doesn't reclaim the page. Since we have cleared the young bit in pte in madvise_pageout, vmscan is likely to reclaim the page since it wouldn't see the ferencecd_ptes from folio_check_references. Could you clarify if I miss something here? > It seems we can do it by directly ignoring references, thus we > can remove tlb flush in madvise and rmap overhead in vmscan. > > Regarding the side effect, in the original code, if a parallel > thread runs side by side to access the madvised memory with the > thread doing madvise, folios will get a chance to be re-activated > by vmscan. But with the patch, they will still be reclaimed. But > this behaviour doing PAGEOUT and doing access at the same time is > quite silly like DoS. So probably, we don't need to care. > > A microbench as below has shown 6% decrement on the latency of > MADV_PAGEOUT, > > #define PGSIZE 4096 > main() > { > int i; > #define SIZE 512*1024*1024 > volatile long *p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE, > MAP_PRIVATE | MAP_ANONYMOUS, -1, 0); > > for (i = 0; i < SIZE/sizeof(long); i += PGSIZE / sizeof(long)) > p[i] = 0x11; > > madvise(p, SIZE, MADV_PAGEOUT); > } > > w/o patch w/ patch > root@10:~# time ./a.out root@10:~# time ./a.out > real 0m49.634s real 0m46.334s > user 0m0.637s user 0m0.648s > sys 0m47.434s sys 0m44.265s > > Signed-off-by: Barry Song <v-songbaohua@oppo.com> > --- > mm/damon/paddr.c | 2 +- > mm/internal.h | 2 +- > mm/madvise.c | 8 ++++---- > mm/vmscan.c | 12 +++++++----- > 4 files changed, 13 insertions(+), 11 deletions(-) > > diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c > index 081e2a325778..5e6dc312072c 100644 > --- a/mm/damon/paddr.c > +++ b/mm/damon/paddr.c > @@ -249,7 +249,7 @@ static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s) > put_folio: > folio_put(folio); > } > - applied = reclaim_pages(&folio_list); > + applied = reclaim_pages(&folio_list, false); > cond_resched(); > return applied * PAGE_SIZE; > } > diff --git a/mm/internal.h b/mm/internal.h > index 93e229112045..36c11ea41f47 100644 > --- a/mm/internal.h > +++ b/mm/internal.h > @@ -868,7 +868,7 @@ extern unsigned long __must_check vm_mmap_pgoff(struct file *, unsigned long, > unsigned long, unsigned long); > > extern void set_pageblock_order(void); > -unsigned long reclaim_pages(struct list_head *folio_list); > +unsigned long reclaim_pages(struct list_head *folio_list, bool ignore_references); > unsigned int reclaim_clean_pages_from_list(struct zone *zone, > struct list_head *folio_list); > /* The ALLOC_WMARK bits are used as an index to zone->watermark */ > diff --git a/mm/madvise.c b/mm/madvise.c > index abde3edb04f0..44a498c94158 100644 > --- a/mm/madvise.c > +++ b/mm/madvise.c > @@ -386,7 +386,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd, > return 0; > } > > - if (pmd_young(orig_pmd)) { > + if (!pageout && pmd_young(orig_pmd)) { > pmdp_invalidate(vma, addr, pmd); > orig_pmd = pmd_mkold(orig_pmd); > > @@ -410,7 +410,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd, > huge_unlock: > spin_unlock(ptl); > if (pageout) > - reclaim_pages(&folio_list); > + 
reclaim_pages(&folio_list, true); > return 0; > } > > @@ -490,7 +490,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd, > > VM_BUG_ON_FOLIO(folio_test_large(folio), folio); > > - if (pte_young(ptent)) { > + if (!pageout && pte_young(ptent)) { > ptent = ptep_get_and_clear_full(mm, addr, pte, > tlb->fullmm); > ptent = pte_mkold(ptent); > @@ -524,7 +524,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd, > pte_unmap_unlock(start_pte, ptl); > } > if (pageout) > - reclaim_pages(&folio_list); > + reclaim_pages(&folio_list, true); > cond_resched(); > > return 0; > diff --git a/mm/vmscan.c b/mm/vmscan.c > index 402c290fbf5a..ba2f37f46a73 100644 > --- a/mm/vmscan.c > +++ b/mm/vmscan.c > @@ -2102,7 +2102,8 @@ static void shrink_active_list(unsigned long nr_to_scan, > } > > static unsigned int reclaim_folio_list(struct list_head *folio_list, > - struct pglist_data *pgdat) > + struct pglist_data *pgdat, > + bool ignore_references) > { > struct reclaim_stat dummy_stat; > unsigned int nr_reclaimed; > @@ -2115,7 +2116,7 @@ static unsigned int reclaim_folio_list(struct list_head *folio_list, > .no_demotion = 1, > }; > > - nr_reclaimed = shrink_folio_list(folio_list, pgdat, &sc, &dummy_stat, false); > + nr_reclaimed = shrink_folio_list(folio_list, pgdat, &sc, &dummy_stat, ignore_references); > while (!list_empty(folio_list)) { > folio = lru_to_folio(folio_list); > list_del(&folio->lru); > @@ -2125,7 +2126,7 @@ static unsigned int reclaim_folio_list(struct list_head *folio_list, > return nr_reclaimed; > } > > -unsigned long reclaim_pages(struct list_head *folio_list) > +unsigned long reclaim_pages(struct list_head *folio_list, bool ignore_references) > { > int nid; > unsigned int nr_reclaimed = 0; > @@ -2147,11 +2148,12 @@ unsigned long reclaim_pages(struct list_head *folio_list) > continue; > } > > - nr_reclaimed += reclaim_folio_list(&node_folio_list, NODE_DATA(nid)); > + nr_reclaimed += reclaim_folio_list(&node_folio_list, NODE_DATA(nid), > + ignore_references); > nid = folio_nid(lru_to_folio(folio_list)); > } while (!list_empty(folio_list)); > > - nr_reclaimed += reclaim_folio_list(&node_folio_list, NODE_DATA(nid)); > + nr_reclaimed += reclaim_folio_list(&node_folio_list, NODE_DATA(nid), ignore_references); > > memalloc_noreclaim_restore(noreclaim_flag); > > -- > 2.34.1 >
On Sat, Feb 24, 2024 at 11:09 AM Minchan Kim <minchan@kernel.org> wrote: > > Hi Barry, > > On Fri, Feb 23, 2024 at 05:15:50PM +1300, Barry Song wrote: > > From: Barry Song <v-songbaohua@oppo.com> > > > > While doing MADV_PAGEOUT, the current code will clear PTE young > > so that vmscan won't read young flags to allow the reclamation > > of madvised folios to go ahead. > > Isn't it good to accelerate reclaiming? vmscan checks whether the > page was accessed recenlty by the young bit from pte and if it is, > it doesn't reclaim the page. Since we have cleared the young bit > in pte in madvise_pageout, vmscan is likely to reclaim the page > since it wouldn't see the ferencecd_ptes from folio_check_references. right, but the proposal is asking vmscan to skip the folio_check_references if this is a PAGEOUT. so we remove both pte_clear_young and rmap of folio_check_references. > > Could you clarify if I miss something here? guest you missed we are skipping folio_check_references now. we remove both, thus, make MADV_PAGEOUT 6% faster. > > > > It seems we can do it by directly ignoring references, thus we > > can remove tlb flush in madvise and rmap overhead in vmscan. > > > > Regarding the side effect, in the original code, if a parallel > > thread runs side by side to access the madvised memory with the > > thread doing madvise, folios will get a chance to be re-activated > > by vmscan. But with the patch, they will still be reclaimed. But > > this behaviour doing PAGEOUT and doing access at the same time is > > quite silly like DoS. So probably, we don't need to care. > > > > A microbench as below has shown 6% decrement on the latency of > > MADV_PAGEOUT, > > > > #define PGSIZE 4096 > > main() > > { > > int i; > > #define SIZE 512*1024*1024 > > volatile long *p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE, > > MAP_PRIVATE | MAP_ANONYMOUS, -1, 0); > > > > for (i = 0; i < SIZE/sizeof(long); i += PGSIZE / sizeof(long)) > > p[i] = 0x11; > > > > madvise(p, SIZE, MADV_PAGEOUT); > > } > > > > w/o patch w/ patch > > root@10:~# time ./a.out root@10:~# time ./a.out > > real 0m49.634s real 0m46.334s > > user 0m0.637s user 0m0.648s > > sys 0m47.434s sys 0m44.265s > > > > Signed-off-by: Barry Song <v-songbaohua@oppo.com> > > --- > > mm/damon/paddr.c | 2 +- > > mm/internal.h | 2 +- > > mm/madvise.c | 8 ++++---- > > mm/vmscan.c | 12 +++++++----- > > 4 files changed, 13 insertions(+), 11 deletions(-) > > > > diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c > > index 081e2a325778..5e6dc312072c 100644 > > --- a/mm/damon/paddr.c > > +++ b/mm/damon/paddr.c > > @@ -249,7 +249,7 @@ static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s) > > put_folio: > > folio_put(folio); > > } > > - applied = reclaim_pages(&folio_list); > > + applied = reclaim_pages(&folio_list, false); > > cond_resched(); > > return applied * PAGE_SIZE; > > } > > diff --git a/mm/internal.h b/mm/internal.h > > index 93e229112045..36c11ea41f47 100644 > > --- a/mm/internal.h > > +++ b/mm/internal.h > > @@ -868,7 +868,7 @@ extern unsigned long __must_check vm_mmap_pgoff(struct file *, unsigned long, > > unsigned long, unsigned long); > > > > extern void set_pageblock_order(void); > > -unsigned long reclaim_pages(struct list_head *folio_list); > > +unsigned long reclaim_pages(struct list_head *folio_list, bool ignore_references); > > unsigned int reclaim_clean_pages_from_list(struct zone *zone, > > struct list_head *folio_list); > > /* The ALLOC_WMARK bits are used as an index to zone->watermark */ > > diff --git a/mm/madvise.c 
b/mm/madvise.c > > index abde3edb04f0..44a498c94158 100644 > > --- a/mm/madvise.c > > +++ b/mm/madvise.c > > @@ -386,7 +386,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd, > > return 0; > > } > > > > - if (pmd_young(orig_pmd)) { > > + if (!pageout && pmd_young(orig_pmd)) { > > pmdp_invalidate(vma, addr, pmd); > > orig_pmd = pmd_mkold(orig_pmd); > > > > @@ -410,7 +410,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd, > > huge_unlock: > > spin_unlock(ptl); > > if (pageout) > > - reclaim_pages(&folio_list); > > + reclaim_pages(&folio_list, true); > > return 0; > > } > > > > @@ -490,7 +490,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd, > > > > VM_BUG_ON_FOLIO(folio_test_large(folio), folio); > > > > - if (pte_young(ptent)) { > > + if (!pageout && pte_young(ptent)) { > > ptent = ptep_get_and_clear_full(mm, addr, pte, > > tlb->fullmm); > > ptent = pte_mkold(ptent); > > @@ -524,7 +524,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd, > > pte_unmap_unlock(start_pte, ptl); > > } > > if (pageout) > > - reclaim_pages(&folio_list); > > + reclaim_pages(&folio_list, true); > > cond_resched(); > > > > return 0; > > diff --git a/mm/vmscan.c b/mm/vmscan.c > > index 402c290fbf5a..ba2f37f46a73 100644 > > --- a/mm/vmscan.c > > +++ b/mm/vmscan.c > > @@ -2102,7 +2102,8 @@ static void shrink_active_list(unsigned long nr_to_scan, > > } > > > > static unsigned int reclaim_folio_list(struct list_head *folio_list, > > - struct pglist_data *pgdat) > > + struct pglist_data *pgdat, > > + bool ignore_references) > > { > > struct reclaim_stat dummy_stat; > > unsigned int nr_reclaimed; > > @@ -2115,7 +2116,7 @@ static unsigned int reclaim_folio_list(struct list_head *folio_list, > > .no_demotion = 1, > > }; > > > > - nr_reclaimed = shrink_folio_list(folio_list, pgdat, &sc, &dummy_stat, false); > > + nr_reclaimed = shrink_folio_list(folio_list, pgdat, &sc, &dummy_stat, ignore_references); > > while (!list_empty(folio_list)) { > > folio = lru_to_folio(folio_list); > > list_del(&folio->lru); > > @@ -2125,7 +2126,7 @@ static unsigned int reclaim_folio_list(struct list_head *folio_list, > > return nr_reclaimed; > > } > > > > -unsigned long reclaim_pages(struct list_head *folio_list) > > +unsigned long reclaim_pages(struct list_head *folio_list, bool ignore_references) > > { > > int nid; > > unsigned int nr_reclaimed = 0; > > @@ -2147,11 +2148,12 @@ unsigned long reclaim_pages(struct list_head *folio_list) > > continue; > > } > > > > - nr_reclaimed += reclaim_folio_list(&node_folio_list, NODE_DATA(nid)); > > + nr_reclaimed += reclaim_folio_list(&node_folio_list, NODE_DATA(nid), > > + ignore_references); > > nid = folio_nid(lru_to_folio(folio_list)); > > } while (!list_empty(folio_list)); > > > > - nr_reclaimed += reclaim_folio_list(&node_folio_list, NODE_DATA(nid)); > > + nr_reclaimed += reclaim_folio_list(&node_folio_list, NODE_DATA(nid), ignore_references); > > > > memalloc_noreclaim_restore(noreclaim_flag); > > > > -- > > 2.34.1 > > Thanks Barry
On Sat, Feb 24, 2024 at 11:20:36AM +1300, Barry Song wrote: > On Sat, Feb 24, 2024 at 11:09 AM Minchan Kim <minchan@kernel.org> wrote: > > > > Hi Barry, > > > > On Fri, Feb 23, 2024 at 05:15:50PM +1300, Barry Song wrote: > > > From: Barry Song <v-songbaohua@oppo.com> > > > > > > While doing MADV_PAGEOUT, the current code will clear PTE young > > > so that vmscan won't read young flags to allow the reclamation > > > of madvised folios to go ahead. > > > > Isn't it good to accelerate reclaiming? vmscan checks whether the > > page was accessed recenlty by the young bit from pte and if it is, > > it doesn't reclaim the page. Since we have cleared the young bit > > in pte in madvise_pageout, vmscan is likely to reclaim the page > > since it wouldn't see the ferencecd_ptes from folio_check_references. > > right, but the proposal is asking vmscan to skip the folio_check_references > if this is a PAGEOUT. so we remove both pte_clear_young and rmap > of folio_check_references. > > > > > > Could you clarify if I miss something here? > > guest you missed we are skipping folio_check_references now. > we remove both, thus, make MADV_PAGEOUT 6% faster. This makes sense to me. Only concern was race with mlock during the reclaim but the race was already there for normal page reclaiming. Thus, mlock would already handle it. Thanks.
On Sat, Feb 24, 2024 at 7:24 AM Minchan Kim <minchan@kernel.org> wrote: > > On Sat, Feb 24, 2024 at 11:20:36AM +1300, Barry Song wrote: > > On Sat, Feb 24, 2024 at 11:09 AM Minchan Kim <minchan@kernel.org> wrote: > > > > > > Hi Barry, > > > > > > On Fri, Feb 23, 2024 at 05:15:50PM +1300, Barry Song wrote: > > > > From: Barry Song <v-songbaohua@oppo.com> > > > > > > > > While doing MADV_PAGEOUT, the current code will clear PTE young > > > > so that vmscan won't read young flags to allow the reclamation > > > > of madvised folios to go ahead. > > > > > > Isn't it good to accelerate reclaiming? vmscan checks whether the > > > page was accessed recenlty by the young bit from pte and if it is, > > > it doesn't reclaim the page. Since we have cleared the young bit > > > in pte in madvise_pageout, vmscan is likely to reclaim the page > > > since it wouldn't see the ferencecd_ptes from folio_check_references. > > > > right, but the proposal is asking vmscan to skip the folio_check_references > > if this is a PAGEOUT. so we remove both pte_clear_young and rmap > > of folio_check_references. > > > > > > > > Could you clarify if I miss something here? > > > > guest you missed we are skipping folio_check_references now. > > we remove both, thus, make MADV_PAGEOUT 6% faster. > > This makes sense to me. > > Only concern was race with mlock during the reclaim but the race was already > there for normal page reclaming. Thus, mlock would already handle it. yes. in try_to_unmap_one(), mlock()'s vma is not reclaimed, while (page_vma_mapped_walk(&pvmw)) { /* Unexpected PMD-mapped THP? */ VM_BUG_ON_FOLIO(!pvmw.pte, folio); /* * If the folio is in an mlock()d vma, we must not swap it out. */ if (!(flags & TTU_IGNORE_MLOCK) && (vma->vm_flags & VM_LOCKED)) { /* Restore the mlock which got missed */ if (!folio_test_large(folio)) mlock_vma_folio(folio, vma); page_vma_mapped_walk_done(&pvmw); ret = false; break; } BTW, Hi SeongJae, I am not quite sure if damon also needs this, so I have kept damon as is by setting ignore_references = false. MADV_PAGEOUT is an explicit hint users don't want the memory to be reclaimed, I don't know if it is true for damon as well. If you have some comments, please chime in. > > Thanks. Thanks Barry
On Fri, 23 Feb 2024 17:15:50 +1300 Barry Song <21cnbao@gmail.com> wrote: > From: Barry Song <v-songbaohua@oppo.com> > > While doing MADV_PAGEOUT, the current code will clear PTE young > so that vmscan won't read young flags to allow the reclamation > of madvised folios to go ahead. > It seems we can do it by directly ignoring references, thus we > can remove tlb flush in madvise and rmap overhead in vmscan. > > Regarding the side effect, in the original code, if a parallel > thread runs side by side to access the madvised memory with the > thread doing madvise, folios will get a chance to be re-activated > by vmscan. But with the patch, they will still be reclaimed. But > this behaviour doing PAGEOUT and doing access at the same time is > quite silly like DoS. So probably, we don't need to care. I think we might need to take care of the case, since users may use just a best-effort estimation like DAMON for the target pages. In such cases, the page granularity re-check of the access could be helpful. So I concern if this could be a visible behavioral change for some valid use cases. > > A microbench as below has shown 6% decrement on the latency of > MADV_PAGEOUT, I assume some of the users may use MADV_PAGEOUT for proactive reclamation of the memory. In the use case, I think latency of MADV_PAGEOUT might be not that important. Hence I think the cons of the behavioral change might outweigh the pros of the latench improvement, for such best-effort proactive reclamation use case. Hope to hear and learn from others' opinions. > > #define PGSIZE 4096 > main() > { > int i; > #define SIZE 512*1024*1024 > volatile long *p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE, > MAP_PRIVATE | MAP_ANONYMOUS, -1, 0); > > for (i = 0; i < SIZE/sizeof(long); i += PGSIZE / sizeof(long)) > p[i] = 0x11; > > madvise(p, SIZE, MADV_PAGEOUT); > } > > w/o patch w/ patch > root@10:~# time ./a.out root@10:~# time ./a.out > real 0m49.634s real 0m46.334s > user 0m0.637s user 0m0.648s > sys 0m47.434s sys 0m44.265s > > Signed-off-by: Barry Song <v-songbaohua@oppo.com> Thanks, SJ [...]
Hi Barry, On Sat, 24 Feb 2024 12:37:59 +0800 Barry Song <21cnbao@gmail.com> wrote: [...] > > BTW, > Hi SeongJae, > I am not quite sure if damon also needs this, so I have kept damon as is by > setting ignore_references = false. MADV_PAGEOUT is an explicit hint users > don't want the memory to be reclaimed, I don't know if it is true for damon as > well. If you have some comments, please chime in. Thank you for calling my name :) For DAMON's usecase, the document simply says the behavior would be same to MADV_PAGEOUT, so if we conclude to change MADV_PAGEOUT, I think same change should be made for DAMON's usecase, or update DAMON document. Thanks, SJ > > > > > Thanks. > > Thanks > Barry
On Sun, Feb 25, 2024 at 3:02 AM SeongJae Park <sj@kernel.org> wrote: > > On Fri, 23 Feb 2024 17:15:50 +1300 Barry Song <21cnbao@gmail.com> wrote: > > > From: Barry Song <v-songbaohua@oppo.com> > > > > While doing MADV_PAGEOUT, the current code will clear PTE young > > so that vmscan won't read young flags to allow the reclamation > > of madvised folios to go ahead. > > It seems we can do it by directly ignoring references, thus we > > can remove tlb flush in madvise and rmap overhead in vmscan. > > > > Regarding the side effect, in the original code, if a parallel > > thread runs side by side to access the madvised memory with the > > thread doing madvise, folios will get a chance to be re-activated > > by vmscan. But with the patch, they will still be reclaimed. But > > this behaviour doing PAGEOUT and doing access at the same time is > > quite silly like DoS. So probably, we don't need to care. > > I think we might need to take care of the case, since users may use just a > best-effort estimation like DAMON for the target pages. In such cases, the > page granularity re-check of the access could be helpful. So I concern if this > could be a visible behavioral change for some valid use cases. Hi SeongJae, If you read the code of MADV_PAGEOUT, you will find it is not the best-effort. It does clearing pte young and immediately after the ptes are cleared, it reads pte and checks if the ptes are young. If not, reclaim it. So the purpose of clearing PTE young is helping the check of young in folio_references to return false. The gap between clearing ptes and re-checking ptes is quite small at microseconds level. > > > > > A microbench as below has shown 6% decrement on the latency of > > MADV_PAGEOUT, > > I assume some of the users may use MADV_PAGEOUT for proactive reclamation of > the memory. In the use case, I think latency of MADV_PAGEOUT might be not that > important. > > Hence I think the cons of the behavioral change might outweigh the pros of the > latench improvement, for such best-effort proactive reclamation use case. Hope > to hear and learn from others' opinions. I don't see the behavioral change for MADV_PAGEOUT as just the ping-pong is removed. The only chance is in that very small time gap, somebody accesses the cleared ptes and makes it young again, considering this time gap is so small, i don't think it is worth caring. thus, i don't see pros for MADV_PAGEOUT case, but we improve the efficiency of MADV_PAGEOUT and save the power of Android phones. > > > > > #define PGSIZE 4096 > > main() > > { > > int i; > > #define SIZE 512*1024*1024 > > volatile long *p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE, > > MAP_PRIVATE | MAP_ANONYMOUS, -1, 0); > > > > for (i = 0; i < SIZE/sizeof(long); i += PGSIZE / sizeof(long)) > > p[i] = 0x11; > > > > madvise(p, SIZE, MADV_PAGEOUT); > > } > > > > w/o patch w/ patch > > root@10:~# time ./a.out root@10:~# time ./a.out > > real 0m49.634s real 0m46.334s > > user 0m0.637s user 0m0.648s > > sys 0m47.434s sys 0m44.265s > > > > Signed-off-by: Barry Song <v-songbaohua@oppo.com> > > Thanks Barry
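Editorial note: to make the point above a bit more concrete, the illustrative userspace sketch below (not part of the patch) observes the effect being discussed: after MADV_PAGEOUT, an anonymous range loses residency even though every page was written, i.e. was young, just before the call. The program, its 64 MiB size and its use of mincore() are editorial choices; it assumes a kernel with MADV_PAGEOUT and enough swap to take the range.

/* pageout_residency.c - build with: gcc -O2 -o pageout_residency pageout_residency.c */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

static size_t resident_pages(void *addr, size_t len, size_t pgsz)
{
	size_t npages = len / pgsz, n = 0, i;
	unsigned char *vec = malloc(npages);

	if (!vec)
		return (size_t)-1;
	if (mincore(addr, len, vec)) {
		free(vec);
		return (size_t)-1;
	}
	for (i = 0; i < npages; i++)
		n += vec[i] & 1;	/* bit 0: page resident in memory */
	free(vec);
	return n;
}

int main(void)
{
	size_t pgsz = (size_t)sysconf(_SC_PAGESIZE);
	size_t len = 64UL * 1024 * 1024;
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	memset(p, 0x11, len);	/* fault in and dirty every page */
	printf("resident before MADV_PAGEOUT: %zu pages\n",
	       resident_pages(p, len, pgsz));

	if (madvise(p, len, MADV_PAGEOUT))
		perror("madvise");

	printf("resident after  MADV_PAGEOUT: %zu pages\n",
	       resident_pages(p, len, pgsz));
	return 0;
}

On a swap-enabled system the second count should drop sharply both with and without the patch, which matches the argument that the small window between clearing and re-checking the young bit rarely changes the outcome; without swap the anonymous pages simply stay resident.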
On Sun, Feb 25, 2024 at 3:07 AM SeongJae Park <sj@kernel.org> wrote: > > Hi Barry, > > On Sat, 24 Feb 2024 12:37:59 +0800 Barry Song <21cnbao@gmail.com> wrote: > > [...] > > > > BTW, > > Hi SeongJae, > > I am not quite sure if damon also needs this, so I have kept damon as is by > > setting ignore_references = false. MADV_PAGEOUT is an explicit hint users > > don't want the memory to be reclaimed, I don't know if it is true for damon as > > well. If you have some comments, please chime in. > > Thank you for calling my name :) > > For DAMON's usecase, the document simply says the behavior would be same to > MADV_PAGEOUT, so if we conclude to change MADV_PAGEOUT, I think same change > should be made for DAMON's usecase, or update DAMON document. Hi SeongJae, I don't find similar clearing pte young in damon_pa_pageout(), so i guess damon's behaviour is actually different with MADV_PAGEOUT which has pte-clearing. damon is probably the best-effort but MADV_PAGEOUT isn't . static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s) { unsigned long addr, applied; LIST_HEAD(folio_list); for (addr = r->ar.start; addr < r->ar.end; addr += PAGE_SIZE) { struct folio *folio = damon_get_folio(PHYS_PFN(addr)); .... if (damos_pa_filter_out(s, folio)) goto put_folio; folio_clear_referenced(folio); folio_test_clear_young(folio); if (!folio_isolate_lru(folio)) goto put_folio; if (folio_test_unevictable(folio)) folio_putback_lru(folio); else list_add(&folio->lru, &folio_list); put_folio: folio_put(folio); } applied = reclaim_pages(&folio_list); cond_resched(); return applied * PAGE_SIZE; } am i missing something? > > > Thanks, > SJ > Thanks Barry
On Sun, 25 Feb 2024 03:50:48 +0800 Barry Song <21cnbao@gmail.com> wrote: > On Sun, Feb 25, 2024 at 3:02 AM SeongJae Park <sj@kernel.org> wrote: > > > > On Fri, 23 Feb 2024 17:15:50 +1300 Barry Song <21cnbao@gmail.com> wrote: > > > > > From: Barry Song <v-songbaohua@oppo.com> > > > > > > While doing MADV_PAGEOUT, the current code will clear PTE young > > > so that vmscan won't read young flags to allow the reclamation > > > of madvised folios to go ahead. > > > It seems we can do it by directly ignoring references, thus we > > > can remove tlb flush in madvise and rmap overhead in vmscan. > > > > > > Regarding the side effect, in the original code, if a parallel > > > thread runs side by side to access the madvised memory with the > > > thread doing madvise, folios will get a chance to be re-activated > > > by vmscan. But with the patch, they will still be reclaimed. But > > > this behaviour doing PAGEOUT and doing access at the same time is > > > quite silly like DoS. So probably, we don't need to care. > > > > I think we might need to take care of the case, since users may use just a > > best-effort estimation like DAMON for the target pages. In such cases, the > > page granularity re-check of the access could be helpful. So I concern if this > > could be a visible behavioral change for some valid use cases. > > Hi SeongJae, > > If you read the code of MADV_PAGEOUT, you will find it is not the best-effort. I'm not saying about MADV_PAGEOUT, but the logic of ther user of MADV_PAGEOUT, which being used for finding the pages to reclaim. > It does clearing pte young and immediately after the ptes are cleared, it reads > pte and checks if the ptes are young. If not, reclaim it. So the > purpose of clearing > PTE young is helping the check of young in folio_references to return false. > The gap between clearing ptes and re-checking ptes is quite small at > microseconds > level. > > > > > > > > > A microbench as below has shown 6% decrement on the latency of > > > MADV_PAGEOUT, > > > > I assume some of the users may use MADV_PAGEOUT for proactive reclamation of > > the memory. In the use case, I think latency of MADV_PAGEOUT might be not that > > important. > > > > Hence I think the cons of the behavioral change might outweigh the pros of the > > latench improvement, for such best-effort proactive reclamation use case. Hope > > to hear and learn from others' opinions. > > I don't see the behavioral change for MADV_PAGEOUT as just the ping-pong > is removed. The only chance is in that very small time gap, somebody accesses > the cleared ptes and makes it young again, considering this time gap > is so small, > i don't think it is worth caring. thus, i don't see pros for MADV_PAGEOUT > case, but we improve the efficiency of MADV_PAGEOUT and save the power of > Android phones. Ok, I agree the time gap is small enough and the benefit could be significant on such use case. Thank you for enlightening me with the nice examples and the numbers :) Thanks, SJ [...]
On Sat, 24 Feb 2024 11:07:23 -0800 SeongJae Park <sj@kernel.org> wrote: > Hi Barry, > > On Sat, 24 Feb 2024 12:37:59 +0800 Barry Song <21cnbao@gmail.com> wrote: > > [...] > > > > BTW\uff0c > > Hi SeongJae, > > I am not quite sure if damon also needs this, so I have kept damon as is by > > setting ignore_references = false. MADV_PAGEOUT is an explicit hint users > > don't want the memory to be reclaimed, I don't know if it is true for damon as > > well. If you have some comments, please chime in. > > Thank you for calling my name :) > > For DAMON's usecase, the document simply says the behavior would be same to > MADV_PAGEOUT, so if we conclude to change MADV_PAGEOUT, I think same change > should be made for DAMON's usecase, or update DAMON document. Thanks to Barry's nice explanation on my other reply to the patch, now I think the change is modest, and therefore I'd prefer the first way: Changing DAMON's usecase, and keep the document as is. Thanks, SJ > > > Thanks, > SJ > > > > > > > > > Thanks. > > > > Thanks > > Barry
On Sun, Feb 25, 2024 at 4:12 AM SeongJae Park <sj@kernel.org> wrote: > > On Sat, 24 Feb 2024 11:07:23 -0800 SeongJae Park <sj@kernel.org> wrote: > > > Hi Barry, > > > > On Sat, 24 Feb 2024 12:37:59 +0800 Barry Song <21cnbao@gmail.com> wrote: > > > > [...] > > > > > > BTW\uff0c > > > Hi SeongJae, > > > I am not quite sure if damon also needs this, so I have kept damon as is by > > > setting ignore_references = false. MADV_PAGEOUT is an explicit hint users > > > don't want the memory to be reclaimed, I don't know if it is true for damon as > > > well. If you have some comments, please chime in. > > > > Thank you for calling my name :) > > > > For DAMON's usecase, the document simply says the behavior would be same to > > MADV_PAGEOUT, so if we conclude to change MADV_PAGEOUT, I think same change > > should be made for DAMON's usecase, or update DAMON document. > > Thanks to Barry's nice explanation on my other reply to the patch, now I think > the change is modest, and therefore I'd prefer the first way: Changing DAMON's > usecase, and keep the document as is. Hi SeongJae, thanks! I actually blindly voted for keeping DAMON's behaviour but slightly updated the document as I set ignore_references to false for the DAMON case in the RFC :-) --- a/mm/damon/paddr.c +++ b/mm/damon/paddr.c @@ -249,7 +249,7 @@ static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s) put_folio: folio_put(folio); } - applied = reclaim_pages(&folio_list); + applied = reclaim_pages(&folio_list, false); cond_resched(); return applied * PAGE_SIZE; } MADV_PAGEOUT comes from userspace by a specific process to tell the kernel to reclaim its own memory(actually focus on non-shared memory as it skips folios with mapcount>1). The range is a virtual address and the app does know it doesn't want to access the range in the foreseeable future. and the affected app is itself not global. In the DAMON case, it seems the range is the physical address. if the pa is mapped by more than one process, it seems safer to double-check in the kernel as it might affect multiple processes? Please correct me if I am wrong. > > > Thanks, > SJ > > > > > > > Thanks, > > SJ Thanks Barry
On Sun, 25 Feb 2024 04:01:40 +0800 Barry Song <21cnbao@gmail.com> wrote: > On Sun, Feb 25, 2024 at 3:07 AM SeongJae Park <sj@kernel.org> wrote: > > > > Hi Barry, > > > > On Sat, 24 Feb 2024 12:37:59 +0800 Barry Song <21cnbao@gmail.com> wrote: > > > > [...] > > > > > > BTW, > > > Hi SeongJae, > > > I am not quite sure if damon also needs this, so I have kept damon as is by > > > setting ignore_references = false. MADV_PAGEOUT is an explicit hint users > > > don't want the memory to be reclaimed, I don't know if it is true for damon as > > > well. If you have some comments, please chime in. > > > > Thank you for calling my name :) > > > > For DAMON's usecase, the document simply says the behavior would be same to > > MADV_PAGEOUT, so if we conclude to change MADV_PAGEOUT, I think same change > > should be made for DAMON's usecase, or update DAMON document. > > Hi SeongJae, > > I don't find similar clearing pte young in damon_pa_pageout(), so i > guess damon's > behaviour is actually different with MADV_PAGEOUT which has pte-clearing. damon > is probably the best-effort but MADV_PAGEOUT isn't . > > static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s) > { > unsigned long addr, applied; > LIST_HEAD(folio_list); > > for (addr = r->ar.start; addr < r->ar.end; addr += PAGE_SIZE) { > struct folio *folio = damon_get_folio(PHYS_PFN(addr)); > .... > > if (damos_pa_filter_out(s, folio)) > goto put_folio; > > folio_clear_referenced(folio); > folio_test_clear_young(folio); > if (!folio_isolate_lru(folio)) > goto put_folio; > if (folio_test_unevictable(folio)) > folio_putback_lru(folio); > else > list_add(&folio->lru, &folio_list); > put_folio: > folio_put(folio); > } > applied = reclaim_pages(&folio_list); > cond_resched(); > return applied * PAGE_SIZE; > } > > am i missing something? Thank you for checking this again. You're right. Technically speaking, DAMON's usage of MADV_PAGEOUT is in vaddr.c. paddr.c is using not MADV_PAGEOUT but reclaim_pages(). Usage of reclaim_pages() from paddr is different from that of MADV_PAGEOUT since paddr doesn't clear PTE. I was confused from the difference between vaddr and paddr. I actually wanted to document the difference but haven't had a time for that yet. Thank you for letting me remind this. So, your change on MADV_PAGEOUT will make an effect to vaddr, and I think it's ok. Your change on reclaim_pages() could make an effect to paddr, depending on the additional parameter's value. I now think it would better to make no effect here. That is, let's keep the change for paddr.c in your patch as is. Thanks, SJ > > > > > > > Thanks, > > SJ > > > > Thanks > Barry
On Sun, 25 Feb 2024 04:33:25 +0800 Barry Song <21cnbao@gmail.com> wrote: > On Sun, Feb 25, 2024 at 4:12 AM SeongJae Park <sj@kernel.org> wrote: > > > > On Sat, 24 Feb 2024 11:07:23 -0800 SeongJae Park <sj@kernel.org> wrote: > > > > > Hi Barry, > > > > > > On Sat, 24 Feb 2024 12:37:59 +0800 Barry Song <21cnbao@gmail.com> wrote: > > > > > > [...] > > > > > > > > BTW\uff0c > > > > Hi SeongJae, > > > > I am not quite sure if damon also needs this, so I have kept damon as is by > > > > setting ignore_references = false. MADV_PAGEOUT is an explicit hint users > > > > don't want the memory to be reclaimed, I don't know if it is true for damon as > > > > well. If you have some comments, please chime in. > > > > > > Thank you for calling my name :) > > > > > > For DAMON's usecase, the document simply says the behavior would be same to > > > MADV_PAGEOUT, so if we conclude to change MADV_PAGEOUT, I think same change > > > should be made for DAMON's usecase, or update DAMON document. > > > > Thanks to Barry's nice explanation on my other reply to the patch, now I think > > the change is modest, and therefore I'd prefer the first way: Changing DAMON's > > usecase, and keep the document as is. > > Hi SeongJae, > > thanks! I actually blindly voted for keeping DAMON's behaviour but > slightly updated the > document as I set ignore_references to false for the DAMON case in the RFC :-) > > --- a/mm/damon/paddr.c > +++ b/mm/damon/paddr.c > @@ -249,7 +249,7 @@ static unsigned long damon_pa_pageout(struct > damon_region *r, struct damos *s) > put_folio: > folio_put(folio); > } > - applied = reclaim_pages(&folio_list); > + applied = reclaim_pages(&folio_list, false); > cond_resched(); > return applied * PAGE_SIZE; > } > > MADV_PAGEOUT comes from userspace by a specific process to tell the kernel > to reclaim its own memory(actually focus on non-shared memory as it > skips folios with > mapcount>1). > The range is a virtual address and the app does know it doesn't want > to access the > range in the foreseeable future. and the affected app is itself not global. > > In the DAMON case, it seems the range is the physical address. if > the pa is mapped > by more than one process, it seems safer to double-check in the kernel > as it might > affect multiple processes? > > Please correct me if I am wrong. You're correct. Please consider below in my previous reply[1] as my opinion. let's keep the change for paddr.c in your patch as is. [1] https://lore.kernel.org/r/20240224205453.47096-1-sj@kernel.org Thanks, SJ > > > > > > > Thanks, > > SJ > > > > > > > > > > > Thanks, > > > SJ > > Thanks > Barry
On Sun, Feb 25, 2024 at 9:54 AM SeongJae Park <sj@kernel.org> wrote: > > On Sun, 25 Feb 2024 04:01:40 +0800 Barry Song <21cnbao@gmail.com> wrote: > > > On Sun, Feb 25, 2024 at 3:07 AM SeongJae Park <sj@kernel.org> wrote: > > > > > > Hi Barry, > > > > > > On Sat, 24 Feb 2024 12:37:59 +0800 Barry Song <21cnbao@gmail.com> wrote: > > > > > > [...] > > > > > > > > BTW, > > > > Hi SeongJae, > > > > I am not quite sure if damon also needs this, so I have kept damon as is by > > > > setting ignore_references = false. MADV_PAGEOUT is an explicit hint users > > > > don't want the memory to be reclaimed, I don't know if it is true for damon as > > > > well. If you have some comments, please chime in. > > > > > > Thank you for calling my name :) > > > > > > For DAMON's usecase, the document simply says the behavior would be same to > > > MADV_PAGEOUT, so if we conclude to change MADV_PAGEOUT, I think same change > > > should be made for DAMON's usecase, or update DAMON document. > > > > Hi SeongJae, > > > > I don't find similar clearing pte young in damon_pa_pageout(), so i > > guess damon's > > behaviour is actually different with MADV_PAGEOUT which has pte-clearing. damon > > is probably the best-effort but MADV_PAGEOUT isn't . > > > > static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s) > > { > > unsigned long addr, applied; > > LIST_HEAD(folio_list); > > > > for (addr = r->ar.start; addr < r->ar.end; addr += PAGE_SIZE) { > > struct folio *folio = damon_get_folio(PHYS_PFN(addr)); > > .... > > > > if (damos_pa_filter_out(s, folio)) > > goto put_folio; > > > > folio_clear_referenced(folio); > > folio_test_clear_young(folio); > > if (!folio_isolate_lru(folio)) > > goto put_folio; > > if (folio_test_unevictable(folio)) > > folio_putback_lru(folio); > > else > > list_add(&folio->lru, &folio_list); > > put_folio: > > folio_put(folio); > > } > > applied = reclaim_pages(&folio_list); > > cond_resched(); > > return applied * PAGE_SIZE; > > } > > > > am i missing something? > > Thank you for checking this again. You're right. > > Technically speaking, DAMON's usage of MADV_PAGEOUT is in vaddr.c. paddrc is > using not MADV_PAGEOUT but reclaim_pages(). Usage of reclaim_pages() from > paddr is different from that of MADV_PAGEOUT since paddr doesn't clear PTE. I > was confused from the difference between vaddr and paddr. I actually wanted to > document the difference but haven't had a time for that yet. Thank you for > letting me remind this. Hi SeongJae, thanks! I bravely had a go at fixing the damon's doc[1]. as it seems the fix is anyway needed no matter if we have my patch to optimize MADV_PAGEOUT. [1] https://lore.kernel.org/linux-mm/20240224215023.5271-1-21cnbao@gmail.com/ > > So, your change on MADV_PAGEOUT will make an effect to vaddr, and I think it's > ok. Your change on reclaim_pages() could make an effect to paddr, depending on > the additional parameter's value. I now think it would better to make no > effect here. That is, let's keep the change for paddr.c in your patch as is. thanks! it seems everything is quite clear now. > > > Thanks, > SJ > > > > > > > > > > > > Thanks, > > > SJ > > > > > Thanks Barry
diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index 081e2a325778..5e6dc312072c 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -249,7 +249,7 @@ static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s)
 put_folio:
 		folio_put(folio);
 	}
-	applied = reclaim_pages(&folio_list);
+	applied = reclaim_pages(&folio_list, false);
 	cond_resched();
 	return applied * PAGE_SIZE;
 }
diff --git a/mm/internal.h b/mm/internal.h
index 93e229112045..36c11ea41f47 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -868,7 +868,7 @@ extern unsigned long __must_check vm_mmap_pgoff(struct file *, unsigned long,
         unsigned long, unsigned long);
 
 extern void set_pageblock_order(void);
-unsigned long reclaim_pages(struct list_head *folio_list);
+unsigned long reclaim_pages(struct list_head *folio_list, bool ignore_references);
 unsigned int reclaim_clean_pages_from_list(struct zone *zone,
 					    struct list_head *folio_list);
 /* The ALLOC_WMARK bits are used as an index to zone->watermark */
diff --git a/mm/madvise.c b/mm/madvise.c
index abde3edb04f0..44a498c94158 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -386,7 +386,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 			return 0;
 		}
 
-		if (pmd_young(orig_pmd)) {
+		if (!pageout && pmd_young(orig_pmd)) {
 			pmdp_invalidate(vma, addr, pmd);
 			orig_pmd = pmd_mkold(orig_pmd);
 
@@ -410,7 +410,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 huge_unlock:
 		spin_unlock(ptl);
 		if (pageout)
-			reclaim_pages(&folio_list);
+			reclaim_pages(&folio_list, true);
 		return 0;
 	}
 
@@ -490,7 +490,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 
 		VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
 
-		if (pte_young(ptent)) {
+		if (!pageout && pte_young(ptent)) {
 			ptent = ptep_get_and_clear_full(mm, addr, pte,
 							tlb->fullmm);
 			ptent = pte_mkold(ptent);
@@ -524,7 +524,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 		pte_unmap_unlock(start_pte, ptl);
 	}
 	if (pageout)
-		reclaim_pages(&folio_list);
+		reclaim_pages(&folio_list, true);
 	cond_resched();
 
 	return 0;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 402c290fbf5a..ba2f37f46a73 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2102,7 +2102,8 @@ static void shrink_active_list(unsigned long nr_to_scan,
 }
 
 static unsigned int reclaim_folio_list(struct list_head *folio_list,
-					struct pglist_data *pgdat)
+					struct pglist_data *pgdat,
+					bool ignore_references)
 {
 	struct reclaim_stat dummy_stat;
 	unsigned int nr_reclaimed;
@@ -2115,7 +2116,7 @@ static unsigned int reclaim_folio_list(struct list_head *folio_list,
 		.no_demotion = 1,
 	};
 
-	nr_reclaimed = shrink_folio_list(folio_list, pgdat, &sc, &dummy_stat, false);
+	nr_reclaimed = shrink_folio_list(folio_list, pgdat, &sc, &dummy_stat, ignore_references);
 	while (!list_empty(folio_list)) {
 		folio = lru_to_folio(folio_list);
 		list_del(&folio->lru);
@@ -2125,7 +2126,7 @@ static unsigned int reclaim_folio_list(struct list_head *folio_list,
 	return nr_reclaimed;
 }
 
-unsigned long reclaim_pages(struct list_head *folio_list)
+unsigned long reclaim_pages(struct list_head *folio_list, bool ignore_references)
 {
 	int nid;
 	unsigned int nr_reclaimed = 0;
@@ -2147,11 +2148,12 @@ unsigned long reclaim_pages(struct list_head *folio_list)
 			continue;
 		}
 
-		nr_reclaimed += reclaim_folio_list(&node_folio_list, NODE_DATA(nid));
+		nr_reclaimed += reclaim_folio_list(&node_folio_list, NODE_DATA(nid),
+						   ignore_references);
 		nid = folio_nid(lru_to_folio(folio_list));
 	} while (!list_empty(folio_list));
 
-	nr_reclaimed += reclaim_folio_list(&node_folio_list, NODE_DATA(nid));
+	nr_reclaimed += reclaim_folio_list(&node_folio_list, NODE_DATA(nid), ignore_references);
 
 	memalloc_noreclaim_restore(noreclaim_flag);