Message ID | 1684737363-31554-1-git-send-email-zhaoyang.huang@unisoc.com |
---|---|
State | New |
Headers |
From: "zhaoyang.huang" <zhaoyang.huang@unisoc.com> To: Andrew Morton <akpm@linux-foundation.org>, Matthew Wilcox <willy@infradead.org>, Minchan Kim <minchan@kernel.org>, Joonsoo Kim <iamjoonsoo.kim@lge.com>, <linux-mm@kvack.org>, <linux-kernel@vger.kernel.org>, Zhaoyang Huang <huangzhaoyang@gmail.com>, <ke.wang@unisoc.com> Subject: [PATCHv4] mm: skip CMA pages when they are not available Date: Mon, 22 May 2023 14:36:03 +0800 Message-ID: <1684737363-31554-1-git-send-email-zhaoyang.huang@unisoc.com> MIME-Version: 1.0 Content-Type: text/plain |
Series |
[PATCHv4] mm: skip CMA pages when they are not available
|
|
Commit Message
zhaoyang.huang
May 22, 2023, 6:36 a.m. UTC
From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>

This patch fixes unproductive reclaiming of CMA pages by skipping them when
they are not available for the current allocation context. It arises from the
OOM issue below, which was caused by a large proportion of MIGRATE_CMA pages
among the free pages.

[   36.172486] [03-19 10:05:52.172] ActivityManager: page allocation failure: order:0, mode:0xc00(GFP_NOIO), nodemask=(null),cpuset=foreground,mems_allowed=0
[   36.189447] [03-19 10:05:52.189] DMA32: 0*4kB 447*8kB (C) 217*16kB (C) 124*32kB (C) 136*64kB (C) 70*128kB (C) 22*256kB (C) 3*512kB (C) 0*1024kB 0*2048kB 0*4096kB = 35848kB
[   36.193125] [03-19 10:05:52.193] Normal: 231*4kB (UMEH) 49*8kB (MEH) 14*16kB (H) 13*32kB (H) 8*64kB (H) 2*128kB (H) 0*256kB 1*512kB (H) 0*1024kB 0*2048kB 0*4096kB = 3236kB
...
[   36.234447] [03-19 10:05:52.234] SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
[   36.234455] [03-19 10:05:52.234] cache: ext4_io_end, object size: 64, buffer size: 64, default order: 0, min order: 0
[   36.234459] [03-19 10:05:52.234] node 0: slabs: 53,objs: 3392, free: 0

Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
---
v2: update commit message and fix build error when CONFIG_CMA is not set
v3,v4: update code and comments
---
 mm/vmscan.c | 23 ++++++++++++++++++++++-
 1 file changed, 22 insertions(+), 1 deletion(-)
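For context on why the free CMA pages in the log above cannot help the failing
GFP_NOIO request: only movable allocations may be served from MIGRATE_CMA
pageblocks. A minimal, illustrative check of that rule is sketched below; the
helper name is made up here, while gfp_migratetype() and MIGRATE_MOVABLE are
the real kernel symbols the patch relies on.

#include <linux/gfp.h>
#include <linux/mmzone.h>

/*
 * Illustrative helper (not part of the patch): a request without
 * __GFP_MOVABLE, e.g. GFP_NOIO or GFP_KERNEL, maps to a non-movable
 * migratetype, so the page allocator will not hand it pages from
 * MIGRATE_CMA pageblocks - reclaiming CMA pages does not help it.
 */
static inline bool allocation_can_use_cma(gfp_t gfp_mask)
{
	return gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE;
}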
Comments
On Mon, May 22, 2023 at 02:36:03PM +0800, zhaoyang.huang wrote:
> +#ifdef CONFIG_CMA
> +/*
> + * It is waste of effort to scan and reclaim CMA pages if it is not available
> + * for current allocation context
> + */
> +static bool skip_cma(struct folio *folio, struct scan_control *sc)
> +{
> +	if (!current_is_kswapd() &&
> +		gfp_migratetype(sc->gfp_mask) != MIGRATE_MOVABLE &&
> +		get_pageblock_migratetype(&folio->page) == MIGRATE_CMA)
> +		return true;
> +	return false;
> +}
> +#else
> +static bool skip_cma(struct folio *folio, struct scan_control *sc)
> +{
> +	return false;
> +}
> +#endif
> +
>  /*
>   * Isolating page from the lruvec to fill in @dst list by nr_to_scan times.
>   *
> @@ -2239,7 +2259,8 @@ static unsigned long isolate_lru_folios(unsigned long nr_to_scan,
>  		nr_pages = folio_nr_pages(folio);
>  		total_scan += nr_pages;
>
> -		if (folio_zonenum(folio) > sc->reclaim_idx) {
> +		if (folio_zonenum(folio) > sc->reclaim_idx ||
> +				skip_cma(folio, sc)) {
> 			nr_skipped[folio_zonenum(folio)] += nr_pages;
> 			move_to = &folios_skipped;
> 			goto move;

I have no idea if what this patch is trying to accomplish is correct,
but I no longer object to how it is doing it.
On Fri, May 26, 2023 at 4:03 AM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Mon, May 22, 2023 at 02:36:03PM +0800, zhaoyang.huang wrote:
> > +#ifdef CONFIG_CMA
> > +/*
> > + * It is waste of effort to scan and reclaim CMA pages if it is not available
> > + * for current allocation context
> > + */
> > +static bool skip_cma(struct folio *folio, struct scan_control *sc)
> > +{
> > +	if (!current_is_kswapd() &&
> > +		gfp_migratetype(sc->gfp_mask) != MIGRATE_MOVABLE &&
> > +		get_pageblock_migratetype(&folio->page) == MIGRATE_CMA)
> > +		return true;
> > +	return false;
> > +}
> > +#else
> > +static bool skip_cma(struct folio *folio, struct scan_control *sc)
> > +{
> > +	return false;
> > +}
> > +#endif
> > +
> >  /*
> >   * Isolating page from the lruvec to fill in @dst list by nr_to_scan times.
> >   *
> > @@ -2239,7 +2259,8 @@ static unsigned long isolate_lru_folios(unsigned long nr_to_scan,
> >  		nr_pages = folio_nr_pages(folio);
> >  		total_scan += nr_pages;
> >
> > -		if (folio_zonenum(folio) > sc->reclaim_idx) {
> > +		if (folio_zonenum(folio) > sc->reclaim_idx ||
> > +				skip_cma(folio, sc)) {
> > 			nr_skipped[folio_zonenum(folio)] += nr_pages;
> > 			move_to = &folios_skipped;
> > 			goto move;
>
> I have no idea if what this patch is trying to accomplish is correct,
> but I no longer object to how it is doing it.

IMO, this is necessary as there could be such weird scenario, that is an
GFP_KERNEL allocation might get 32 MIGRATE_CMA pages via direct_reclaim
which lead to a low PSI_MEM/vmpressure value but return a NULL pointer
On 22.05.23 08:36, zhaoyang.huang wrote: > From: Zhaoyang Huang <zhaoyang.huang@unisoc.com> > > This patch fixes unproductive reclaiming of CMA pages by skipping them when they > are not available for current context. It is arise from bellowing OOM issue, which > caused by large proportion of MIGRATE_CMA pages among free pages. > > [ 36.172486] [03-19 10:05:52.172] ActivityManager: page allocation failure: order:0, mode:0xc00(GFP_NOIO), nodemask=(null),cpuset=foreground,mems_allowed=0 > [ 36.189447] [03-19 10:05:52.189] DMA32: 0*4kB 447*8kB (C) 217*16kB (C) 124*32kB (C) 136*64kB (C) 70*128kB (C) 22*256kB (C) 3*512kB (C) 0*1024kB 0*2048kB 0*4096kB = 35848kB > [ 36.193125] [03-19 10:05:52.193] Normal: 231*4kB (UMEH) 49*8kB (MEH) 14*16kB (H) 13*32kB (H) 8*64kB (H) 2*128kB (H) 0*256kB 1*512kB (H) 0*1024kB 0*2048kB 0*4096kB = 3236kB > ... > [ 36.234447] [03-19 10:05:52.234] SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC) > [ 36.234455] [03-19 10:05:52.234] cache: ext4_io_end, object size: 64, buffer size: 64, default order: 0, min order: 0 > [ 36.234459] [03-19 10:05:52.234] node 0: slabs: 53,objs: 3392, free: 0 > > Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com> > --- > v2: update commit message and fix build error when CONFIG_CMA is not set > v3,v4: update code and comments > --- > --- > mm/vmscan.c | 23 ++++++++++++++++++++++- > 1 file changed, 22 insertions(+), 1 deletion(-) > > diff --git a/mm/vmscan.c b/mm/vmscan.c > index bd6637f..20facec 100644 > --- a/mm/vmscan.c > +++ b/mm/vmscan.c > @@ -2193,6 +2193,26 @@ static __always_inline void update_lru_sizes(struct lruvec *lruvec, > > } > > +#ifdef CONFIG_CMA > +/* > + * It is waste of effort to scan and reclaim CMA pages if it is not available > + * for current allocation context > + */ /* * Only movable allocations may end up on MIGRATE_CMA pageblocks. If * we're not dealing with a movable allocation, it doesn't make sense to * reclaim from these pageblocks: the reclaimed memory is unusable for * this allocation. */ Did I get it right? > +static bool skip_cma(struct folio *folio, struct scan_control *sc) > +{ > + if (!current_is_kswapd() && > + gfp_migratetype(sc->gfp_mask) != MIGRATE_MOVABLE && > + get_pageblock_migratetype(&folio->page) == MIGRATE_CMA) > + return true; > + return false; return !current_is_kswapd() && gfp_migratetype(sc->gfp_mask) != MIGRATE_MOVABLE && get_pageblock_migratetype(&folio->page) == MIGRATE_CMA;
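Putting David's two suggestions together, the CONFIG_CMA variant of skip_cma()
would then read roughly as sketched below; the logic is exactly that of the
posted patch, only the comment wording and the single-return form change.

#ifdef CONFIG_CMA
/*
 * Only movable allocations may end up on MIGRATE_CMA pageblocks. If
 * we're not dealing with a movable allocation, it doesn't make sense to
 * reclaim from these pageblocks: the reclaimed memory is unusable for
 * this allocation.
 */
static bool skip_cma(struct folio *folio, struct scan_control *sc)
{
	return !current_is_kswapd() &&
		gfp_migratetype(sc->gfp_mask) != MIGRATE_MOVABLE &&
		get_pageblock_migratetype(&folio->page) == MIGRATE_CMA;
}
#else
static bool skip_cma(struct folio *folio, struct scan_control *sc)
{
	return false;
}
#endif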
On Mon, May 22, 2023 at 02:36:03PM +0800, zhaoyang.huang wrote: > From: Zhaoyang Huang <zhaoyang.huang@unisoc.com> > > This patch fixes unproductive reclaiming of CMA pages by skipping them when they > are not available for current context. It is arise from bellowing OOM issue, which > caused by large proportion of MIGRATE_CMA pages among free pages. > > [ 36.172486] [03-19 10:05:52.172] ActivityManager: page allocation failure: order:0, mode:0xc00(GFP_NOIO), nodemask=(null),cpuset=foreground,mems_allowed=0 > [ 36.189447] [03-19 10:05:52.189] DMA32: 0*4kB 447*8kB (C) 217*16kB (C) 124*32kB (C) 136*64kB (C) 70*128kB (C) 22*256kB (C) 3*512kB (C) 0*1024kB 0*2048kB 0*4096kB = 35848kB > [ 36.193125] [03-19 10:05:52.193] Normal: 231*4kB (UMEH) 49*8kB (MEH) 14*16kB (H) 13*32kB (H) 8*64kB (H) 2*128kB (H) 0*256kB 1*512kB (H) 0*1024kB 0*2048kB 0*4096kB = 3236kB > ... > [ 36.234447] [03-19 10:05:52.234] SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC) > [ 36.234455] [03-19 10:05:52.234] cache: ext4_io_end, object size: 64, buffer size: 64, default order: 0, min order: 0 > [ 36.234459] [03-19 10:05:52.234] node 0: slabs: 53,objs: 3392, free: 0 > > Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com> > --- > v2: update commit message and fix build error when CONFIG_CMA is not set > v3,v4: update code and comments > --- > --- > mm/vmscan.c | 23 ++++++++++++++++++++++- > 1 file changed, 22 insertions(+), 1 deletion(-) > > diff --git a/mm/vmscan.c b/mm/vmscan.c > index bd6637f..20facec 100644 > --- a/mm/vmscan.c > +++ b/mm/vmscan.c > @@ -2193,6 +2193,26 @@ static __always_inline void update_lru_sizes(struct lruvec *lruvec, > > } > > +#ifdef CONFIG_CMA > +/* > + * It is waste of effort to scan and reclaim CMA pages if it is not available > + * for current allocation context > + */ > +static bool skip_cma(struct folio *folio, struct scan_control *sc) > +{ > + if (!current_is_kswapd() && The function is called by isolate_lru_folios which is used by both background and direct reclaims at the same time. And sc->reclaim_idx below to filter unproductive reclaim out is used for both cases but why does the cma is considering only direct reclaim path? > + gfp_migratetype(sc->gfp_mask) != MIGRATE_MOVABLE && > + get_pageblock_migratetype(&folio->page) == MIGRATE_CMA) > + return true; > + return false; > +} > +#else > +static bool skip_cma(struct folio *folio, struct scan_control *sc) > +{ > + return false; > +} > +#endif > + > /* > * Isolating page from the lruvec to fill in @dst list by nr_to_scan times. > * > @@ -2239,7 +2259,8 @@ static unsigned long isolate_lru_folios(unsigned long nr_to_scan, > nr_pages = folio_nr_pages(folio); > total_scan += nr_pages; > > - if (folio_zonenum(folio) > sc->reclaim_idx) { > + if (folio_zonenum(folio) > sc->reclaim_idx || > + skip_cma(folio, sc)) { > nr_skipped[folio_zonenum(folio)] += nr_pages; > move_to = &folios_skipped; > goto move; > -- > 1.9.1 >
On Sat, May 27, 2023 at 3:36 AM David Hildenbrand <david@redhat.com> wrote: > > On 22.05.23 08:36, zhaoyang.huang wrote: > > From: Zhaoyang Huang <zhaoyang.huang@unisoc.com> > > > > This patch fixes unproductive reclaiming of CMA pages by skipping them when they > > are not available for current context. It is arise from bellowing OOM issue, which > > caused by large proportion of MIGRATE_CMA pages among free pages. > > > > [ 36.172486] [03-19 10:05:52.172] ActivityManager: page allocation failure: order:0, mode:0xc00(GFP_NOIO), nodemask=(null),cpuset=foreground,mems_allowed=0 > > [ 36.189447] [03-19 10:05:52.189] DMA32: 0*4kB 447*8kB (C) 217*16kB (C) 124*32kB (C) 136*64kB (C) 70*128kB (C) 22*256kB (C) 3*512kB (C) 0*1024kB 0*2048kB 0*4096kB = 35848kB > > [ 36.193125] [03-19 10:05:52.193] Normal: 231*4kB (UMEH) 49*8kB (MEH) 14*16kB (H) 13*32kB (H) 8*64kB (H) 2*128kB (H) 0*256kB 1*512kB (H) 0*1024kB 0*2048kB 0*4096kB = 3236kB > > ... > > [ 36.234447] [03-19 10:05:52.234] SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC) > > [ 36.234455] [03-19 10:05:52.234] cache: ext4_io_end, object size: 64, buffer size: 64, default order: 0, min order: 0 > > [ 36.234459] [03-19 10:05:52.234] node 0: slabs: 53,objs: 3392, free: 0 > > > > Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com> > > --- > > v2: update commit message and fix build error when CONFIG_CMA is not set > > v3,v4: update code and comments > > --- > > --- > > mm/vmscan.c | 23 ++++++++++++++++++++++- > > 1 file changed, 22 insertions(+), 1 deletion(-) > > > > diff --git a/mm/vmscan.c b/mm/vmscan.c > > index bd6637f..20facec 100644 > > --- a/mm/vmscan.c > > +++ b/mm/vmscan.c > > @@ -2193,6 +2193,26 @@ static __always_inline void update_lru_sizes(struct lruvec *lruvec, > > > > } > > > > +#ifdef CONFIG_CMA > > +/* > > + * It is waste of effort to scan and reclaim CMA pages if it is not available > > + * for current allocation context > > + */ > > /* > * Only movable allocations may end up on MIGRATE_CMA pageblocks. If > * we're not dealing with a movable allocation, it doesn't make sense to > * reclaim from these pageblocks: the reclaimed memory is unusable for > * this allocation. > */ > > Did I get it right? Yes, it is right. > > > +static bool skip_cma(struct folio *folio, struct scan_control *sc) > > +{ > > + if (!current_is_kswapd() && > > + gfp_migratetype(sc->gfp_mask) != MIGRATE_MOVABLE && > > + get_pageblock_migratetype(&folio->page) == MIGRATE_CMA) > > + return true; > > + return false; > > return !current_is_kswapd() && > gfp_migratetype(sc->gfp_mask) != MIGRATE_MOVABLE && > get_pageblock_migratetype(&folio->page) == MIGRATE_CMA; ok, thanks > > > -- > Thanks, > > David / dhildenb >
On Sat, May 27, 2023 at 7:03 AM Minchan Kim <minchan@kernel.org> wrote: > > On Mon, May 22, 2023 at 02:36:03PM +0800, zhaoyang.huang wrote: > > From: Zhaoyang Huang <zhaoyang.huang@unisoc.com> > > > > This patch fixes unproductive reclaiming of CMA pages by skipping them when they > > are not available for current context. It is arise from bellowing OOM issue, which > > caused by large proportion of MIGRATE_CMA pages among free pages. > > > > [ 36.172486] [03-19 10:05:52.172] ActivityManager: page allocation failure: order:0, mode:0xc00(GFP_NOIO), nodemask=(null),cpuset=foreground,mems_allowed=0 > > [ 36.189447] [03-19 10:05:52.189] DMA32: 0*4kB 447*8kB (C) 217*16kB (C) 124*32kB (C) 136*64kB (C) 70*128kB (C) 22*256kB (C) 3*512kB (C) 0*1024kB 0*2048kB 0*4096kB = 35848kB > > [ 36.193125] [03-19 10:05:52.193] Normal: 231*4kB (UMEH) 49*8kB (MEH) 14*16kB (H) 13*32kB (H) 8*64kB (H) 2*128kB (H) 0*256kB 1*512kB (H) 0*1024kB 0*2048kB 0*4096kB = 3236kB > > ... > > [ 36.234447] [03-19 10:05:52.234] SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC) > > [ 36.234455] [03-19 10:05:52.234] cache: ext4_io_end, object size: 64, buffer size: 64, default order: 0, min order: 0 > > [ 36.234459] [03-19 10:05:52.234] node 0: slabs: 53,objs: 3392, free: 0 > > > > Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com> > > --- > > v2: update commit message and fix build error when CONFIG_CMA is not set > > v3,v4: update code and comments > > --- > > --- > > mm/vmscan.c | 23 ++++++++++++++++++++++- > > 1 file changed, 22 insertions(+), 1 deletion(-) > > > > diff --git a/mm/vmscan.c b/mm/vmscan.c > > index bd6637f..20facec 100644 > > --- a/mm/vmscan.c > > +++ b/mm/vmscan.c > > @@ -2193,6 +2193,26 @@ static __always_inline void update_lru_sizes(struct lruvec *lruvec, > > > > } > > > > +#ifdef CONFIG_CMA > > +/* > > + * It is waste of effort to scan and reclaim CMA pages if it is not available > > + * for current allocation context > > + */ > > +static bool skip_cma(struct folio *folio, struct scan_control *sc) > > +{ > > + if (!current_is_kswapd() && > > The function is called by isolate_lru_folios which is used by both background > and direct reclaims at the same time. And sc->reclaim_idx below to filter > unproductive reclaim out is used for both cases but why does the cma is considering > only direct reclaim path? Because kswapd's sc->gfp_mask = GFP_KERNEL which can not distinguish this scenario > > > > + gfp_migratetype(sc->gfp_mask) != MIGRATE_MOVABLE && > > + get_pageblock_migratetype(&folio->page) == MIGRATE_CMA) > > + return true; > > + return false; > > +} > > +#else > > +static bool skip_cma(struct folio *folio, struct scan_control *sc) > > +{ > > + return false; > > +} > > +#endif > > + > > /* > > * Isolating page from the lruvec to fill in @dst list by nr_to_scan times. > > * > > @@ -2239,7 +2259,8 @@ static unsigned long isolate_lru_folios(unsigned long nr_to_scan, > > nr_pages = folio_nr_pages(folio); > > total_scan += nr_pages; > > > > - if (folio_zonenum(folio) > sc->reclaim_idx) { > > + if (folio_zonenum(folio) > sc->reclaim_idx || > > + skip_cma(folio, sc)) { > > nr_skipped[folio_zonenum(folio)] += nr_pages; > > move_to = &folios_skipped; > > goto move; > > -- > > 1.9.1 > >
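For reference, the reason the gfp-based check cannot also cover kswapd is that
kswapd reclaims on behalf of no particular caller: its scan_control is built
with a fixed GFP_KERNEL mask, so sc->gfp_mask carries no information about who
is waiting for the memory. A paraphrased sketch of that setup follows (see
balance_pgdat() in mm/vmscan.c; the exact field list varies by kernel version
and this is not the literal source).

/* Paraphrased sketch of kswapd's reclaim context, not the literal kernel code. */
static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
{
	struct scan_control sc = {
		.gfp_mask = GFP_KERNEL,		/* no caller-specific gfp mask to inspect */
		.order = order,
		.reclaim_idx = highest_zoneidx,
		.may_unmap = 1,
	};

	/* ... per-node reclaim loop calling shrink_node() etc. ... */
	return 0;
}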
diff --git a/mm/vmscan.c b/mm/vmscan.c
index bd6637f..20facec 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2193,6 +2193,26 @@ static __always_inline void update_lru_sizes(struct lruvec *lruvec,
 
 }
 
+#ifdef CONFIG_CMA
+/*
+ * It is waste of effort to scan and reclaim CMA pages if it is not available
+ * for current allocation context
+ */
+static bool skip_cma(struct folio *folio, struct scan_control *sc)
+{
+	if (!current_is_kswapd() &&
+		gfp_migratetype(sc->gfp_mask) != MIGRATE_MOVABLE &&
+		get_pageblock_migratetype(&folio->page) == MIGRATE_CMA)
+		return true;
+	return false;
+}
+#else
+static bool skip_cma(struct folio *folio, struct scan_control *sc)
+{
+	return false;
+}
+#endif
+
 /*
  * Isolating page from the lruvec to fill in @dst list by nr_to_scan times.
  *
@@ -2239,7 +2259,8 @@ static unsigned long isolate_lru_folios(unsigned long nr_to_scan,
 		nr_pages = folio_nr_pages(folio);
 		total_scan += nr_pages;
 
-		if (folio_zonenum(folio) > sc->reclaim_idx) {
+		if (folio_zonenum(folio) > sc->reclaim_idx ||
+				skip_cma(folio, sc)) {
 			nr_skipped[folio_zonenum(folio)] += nr_pages;
 			move_to = &folios_skipped;
 			goto move;
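A note on what happens to the skipped folios: like folios above
sc->reclaim_idx, they are spliced back onto the source LRU list and accounted
as per-zone skip events, roughly as in the existing tail of
isolate_lru_folios() sketched below (paraphrased from memory of mm/vmscan.c;
details vary by kernel version).

	/*
	 * Paraphrased sketch: skipped folios go back to the source LRU list
	 * and are reported via the per-zone pgskip counters in /proc/vmstat.
	 */
	if (!list_empty(&folios_skipped)) {
		int zid;

		list_splice(&folios_skipped, src);
		for (zid = 0; zid < MAX_NR_ZONES; zid++) {
			if (!nr_skipped[zid])
				continue;
			__count_zid_vm_events(PGSCAN_SKIP, zid, nr_skipped[zid]);
			skipped += nr_skipped[zid];
		}
	}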