Message ID | 20240202221026.1055122-1-tjmercier@google.com |
---|---|
State | New |
Headers |
Date: Fri, 2 Feb 2024 22:10:25 +0000
From: "T.J. Mercier" <tjmercier@google.com>
To: tjmercier@google.com, Johannes Weiner <hannes@cmpxchg.org>, Michal Hocko <mhocko@kernel.org>, Roman Gushchin <roman.gushchin@linux.dev>, Shakeel Butt <shakeelb@google.com>, Muchun Song <muchun.song@linux.dev>, Andrew Morton <akpm@linux-foundation.org>, Efly Young <yangyifei03@kuaishou.com>
Cc: android-mm@google.com, yuzhao@google.com, mkoutny@suse.com, cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2] mm: memcg: Use larger batches for proactive reclaim
Message-ID: <20240202221026.1055122-1-tjmercier@google.com>
X-Mailer: git-send-email 2.43.0.594.gd9cf4e227d-goog |
Series |
[v2] mm: memcg: Use larger batches for proactive reclaim
Commit Message
T.J. Mercier
Feb. 2, 2024, 10:10 p.m. UTC
Before 0388536ac291 ("mm:vmscan: fix inaccurate reclaim during proactive
reclaim") we passed the number of pages for the reclaim request directly
to try_to_free_mem_cgroup_pages, which could lead to significant
overreclaim. After 0388536ac291 the number of pages was limited to a
maximum of 32 (SWAP_CLUSTER_MAX) to reduce the amount of overreclaim.
However, such a small batch size caused a regression in reclaim
performance due to many more reclaim start/stop cycles inside
memory_reclaim.
Reclaim tries to balance nr_to_reclaim fidelity with fairness across
nodes and cgroups over which the pages are spread. As such, the bigger
the request, the bigger the absolute overreclaim error. Historic
in-kernel users of reclaim have used fixed, small-sized requests to
approach an appropriate reclaim rate over time. When we reclaim a user
request of arbitrary size, use decaying batch sizes to manage error
while maintaining reasonable throughput.
root - full reclaim       pages/sec   time (sec)
pre-0388536ac291      :   68047       10.46
post-0388536ac291     :   13742       inf
(reclaim-reclaimed)/4 :   67352       10.51

/uid_0 - 1G reclaim       pages/sec   time (sec)   overreclaim (MiB)
pre-0388536ac291      :   258822      1.12         107.8
post-0388536ac291     :   105174      2.49         3.5
(reclaim-reclaimed)/4 :   233396      1.12         -7.4

/uid_0 - full reclaim     pages/sec   time (sec)
pre-0388536ac291      :   72334       7.09
post-0388536ac291     :   38105       14.45
(reclaim-reclaimed)/4 :   72914       6.96
Fixes: 0388536ac291 ("mm:vmscan: fix inaccurate reclaim during proactive reclaim")
Signed-off-by: T.J. Mercier <tjmercier@google.com>
---
v2: Simplify the request size calculation per Johannes Weiner and Michal Koutný
mm/memcontrol.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
Comments
On Fri, Feb 2, 2024 at 2:10 PM T.J. Mercier <tjmercier@google.com> wrote:
>
> [snip]
>
> Fixes: 0388536ac291 ("mm:vmscan: fix inaccurate reclaim during proactive reclaim")
> Signed-off-by: T.J. Mercier <tjmercier@google.com>

LGTM with a nit below:
Reviewed-by: Yosry Ahmed <yosryahmed@google.com>

> ---
> v2: Simplify the request size calculation per Johannes Weiner and Michal Koutný
>
> mm/memcontrol.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 46d8d02114cf..e6f921555e07 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -6965,6 +6965,9 @@ static ssize_t memory_reclaim(struct kernfs_open_file *of, char *buf,
>  	while (nr_reclaimed < nr_to_reclaim) {
>  		unsigned long reclaimed;
>
> +		/* Will converge on zero, but reclaim enforces a minimum */
> +		unsigned long batch_size = (nr_to_reclaim - nr_reclaimed) / 4;
> +
>  		if (signal_pending(current))
>  			return -EINTR;
>
> @@ -6977,7 +6980,7 @@ static ssize_t memory_reclaim(struct kernfs_open_file *of, char *buf,
>  		lru_add_drain_all();
>
>  		reclaimed = try_to_free_mem_cgroup_pages(memcg,
> -				min(nr_to_reclaim - nr_reclaimed, SWAP_CLUSTER_MAX),
> +				batch_size,
>  				GFP_KERNEL, reclaim_options);

I think the above two lines should now fit into one.
On Fri, Feb 2, 2024 at 2:14 PM Yosry Ahmed <yosryahmed@google.com> wrote:
>
> On Fri, Feb 2, 2024 at 2:10 PM T.J. Mercier <tjmercier@google.com> wrote:
> >
> > [snip]
>
> LGTM with a nit below:
> Reviewed-by: Yosry Ahmed <yosryahmed@google.com>

Thanks

> >  		reclaimed = try_to_free_mem_cgroup_pages(memcg,
> > -				min(nr_to_reclaim - nr_reclaimed, SWAP_CLUSTER_MAX),
> > +				batch_size,
> >  				GFP_KERNEL, reclaim_options);
>
> I think the above two lines should now fit into one.

It goes out to 81 characters. I wasn't brave enough, even though the
80 char limit is no more. :)

This takes it out to 100 but gets rid of batch_size if folks are ok with it:

 		reclaimed = try_to_free_mem_cgroup_pages(memcg,
-				min(nr_to_reclaim - nr_reclaimed, SWAP_CLUSTER_MAX),
+				/* Will converge on zero, but reclaim enforces a minimum */
+				(nr_to_reclaim - nr_reclaimed) / 4,
 				GFP_KERNEL, reclaim_options);
> > > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > > index 46d8d02114cf..e6f921555e07 100644
> > > --- a/mm/memcontrol.c
> > > +++ b/mm/memcontrol.c
> > > @@ -6965,6 +6965,9 @@ static ssize_t memory_reclaim(struct kernfs_open_file *of, char *buf,
> > >  	while (nr_reclaimed < nr_to_reclaim) {
> > >  		unsigned long reclaimed;
> > >
> > > +		/* Will converge on zero, but reclaim enforces a minimum */
> > > +		unsigned long batch_size = (nr_to_reclaim - nr_reclaimed) / 4;
> > > +

I think it's clearer with no blank lines between declarations. Perhaps
add these two lines right above the declaration of "reclaimed"?

> > >  		if (signal_pending(current))
> > >  			return -EINTR;
> > >
> > > @@ -6977,7 +6980,7 @@ static ssize_t memory_reclaim(struct kernfs_open_file *of, char *buf,
> > >  		lru_add_drain_all();
> > >
> > >  		reclaimed = try_to_free_mem_cgroup_pages(memcg,
> > > -				min(nr_to_reclaim - nr_reclaimed, SWAP_CLUSTER_MAX),
> > > +				batch_size,
> > >  				GFP_KERNEL, reclaim_options);
> >
> > I think the above two lines should now fit into one.
>
> It goes out to 81 characters. I wasn't brave enough, even though the
> 80 char limit is no more. :)

Oh okay, I would leave it as-is or rename batch_size to something
slightly shorter. Not a big deal either way. Going to 81 chars is
probably fine too.
On Fri, Feb 02, 2024 at 02:13:20PM -0800, Yosry Ahmed wrote:
> On Fri, Feb 2, 2024 at 2:10 PM T.J. Mercier <tjmercier@google.com> wrote:
> > @@ -6965,6 +6965,9 @@ static ssize_t memory_reclaim(struct kernfs_open_file *of, char *buf,
> >  	while (nr_reclaimed < nr_to_reclaim) {
> >  		unsigned long reclaimed;
> >
> > +		/* Will converge on zero, but reclaim enforces a minimum */
> > +		unsigned long batch_size = (nr_to_reclaim - nr_reclaimed) / 4;
> > +
> >  		if (signal_pending(current))
> >  			return -EINTR;
> >
> > @@ -6977,7 +6980,7 @@ static ssize_t memory_reclaim(struct kernfs_open_file *of, char *buf,
> >  		lru_add_drain_all();
> >
> >  		reclaimed = try_to_free_mem_cgroup_pages(memcg,
> > -				min(nr_to_reclaim - nr_reclaimed, SWAP_CLUSTER_MAX),
> > +				batch_size,
> >  				GFP_KERNEL, reclaim_options);
>
> I think the above two lines should now fit into one.

Yeah might as well compact that again. The newline in the declarations
is a bit unusual for this codebase as well, and puts the comment sort
of away from the "reclaim" it refers to. This?

	/* Will converge on zero, but reclaim enforces a minimum */
	batch_size = (nr_to_reclaim - nr_reclaimed) / 4;

	reclaimed = try_to_free_mem_cgroup_pages(memcg, batch_size,
						 GFP_KERNEL, reclaim_options);

But agreed, it's all just nitpickety nickpicking. :)

Acked-by: Johannes Weiner <hannes@cmpxchg.org>
On Fri, Feb 2, 2024 at 2:41 PM Johannes Weiner <hannes@cmpxchg.org> wrote:
>
> Yeah might as well compact that again. The newline in the declarations
> is a bit unusual for this codebase as well, and puts the comment sort
> of away from the "reclaim" it refers to. This?
>
> 	/* Will converge on zero, but reclaim enforces a minimum */
> 	batch_size = (nr_to_reclaim - nr_reclaimed) / 4;
>
> 	reclaimed = try_to_free_mem_cgroup_pages(memcg, batch_size,
> 						 GFP_KERNEL, reclaim_options);
>
> But agreed, it's all just nitpickety nickpicking. :)
>
> Acked-by: Johannes Weiner <hannes@cmpxchg.org>

-std=gnu11 to the rescue

+		/* Will converge on zero, but reclaim enforces a minimum */
+		unsigned long batch_size = (nr_to_reclaim - nr_reclaimed) / 4;

 		reclaimed = try_to_free_mem_cgroup_pages(memcg,
-				min(nr_to_reclaim - nr_reclaimed, SWAP_CLUSTER_MAX),
+				batch_size,
 				GFP_KERNEL, reclaim_options);
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 46d8d02114cf..e6f921555e07 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6965,6 +6965,9 @@ static ssize_t memory_reclaim(struct kernfs_open_file *of, char *buf,
 	while (nr_reclaimed < nr_to_reclaim) {
 		unsigned long reclaimed;
 
+		/* Will converge on zero, but reclaim enforces a minimum */
+		unsigned long batch_size = (nr_to_reclaim - nr_reclaimed) / 4;
+
 		if (signal_pending(current))
 			return -EINTR;
 
@@ -6977,7 +6980,7 @@ static ssize_t memory_reclaim(struct kernfs_open_file *of, char *buf,
 		lru_add_drain_all();
 
 		reclaimed = try_to_free_mem_cgroup_pages(memcg,
-				min(nr_to_reclaim - nr_reclaimed, SWAP_CLUSTER_MAX),
+				batch_size,
 				GFP_KERNEL, reclaim_options);
 
 		if (!reclaimed && !nr_retries--)