From patchwork Mon May 15 09:38:15 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "zhaoyang.huang"
X-Patchwork-Id: 94041
From: "zhaoyang.huang" <zhaoyang.huang@unisoc.com>
To: Andrew Morton, Matthew Wilcox, Minchan Kim, Joonsoo Kim, Zhaoyang Huang
Subject: [Resend PATCHv2] mm: skip CMA pages when they are not available
Date: Mon, 15 May 2023 17:38:15 +0800
Message-ID: <1684143495-12872-1-git-send-email-zhaoyang.huang@unisoc.com>
X-Mailer: git-send-email 1.9.1
MIME-Version: 1.0
X-Mailing-List: linux-kernel@vger.kernel.org

From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>

This patch fixes unproductive reclaim of CMA pages by skipping them when
they are not available to the current allocation context. The problem
showed up as the OOM issue below, caused by a large proportion of
MIGRATE_CMA pages among the free pages. Commit 168676649 addressed this
by trying CMA pages first instead of falling back in rmqueue; this patch
proposes a complementary fix from the reclaim perspective.

04166 < 4> [   36.172486] [03-19 10:05:52.172] ActivityManager: page allocation failure: order:0, mode:0xc00(GFP_NOIO), nodemask=(null),cpuset=foreground,mems_allowed=0
0419C < 4> [   36.189447] [03-19 10:05:52.189] DMA32: 0*4kB 447*8kB (C) 217*16kB (C) 124*32kB (C) 136*64kB (C) 70*128kB (C) 22*256kB (C) 3*512kB (C) 0*1024kB 0*2048kB 0*4096kB = 35848kB
0419D < 4> [   36.193125] [03-19 10:05:52.193] Normal: 231*4kB (UMEH) 49*8kB (MEH) 14*16kB (H) 13*32kB (H) 8*64kB (H) 2*128kB (H) 0*256kB 1*512kB (H) 0*1024kB 0*2048kB 0*4096kB = 3236kB
......
041EA < 4> [   36.234447] [03-19 10:05:52.234] SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
041EB < 4> [   36.234455] [03-19 10:05:52.234] cache: ext4_io_end, object size: 64, buffer size: 64, default order: 0, min order: 0
041EC < 4> [   36.234459] [03-19 10:05:52.234] node 0: slabs: 53, objs: 3392, free: 0

Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
---
v2: update commit message and fix build error when CONFIG_CMA is not set
---
 mm/vmscan.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index bd6637f..19fb445 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2225,10 +2225,16 @@ static unsigned long isolate_lru_folios(unsigned long nr_to_scan,
 	unsigned long nr_skipped[MAX_NR_ZONES] = { 0, };
 	unsigned long skipped = 0;
 	unsigned long scan, total_scan, nr_pages;
+	bool cma_cap = true;
+	struct page *page;
 	LIST_HEAD(folios_skipped);
 
 	total_scan = 0;
 	scan = 0;
+	if ((IS_ENABLED(CONFIG_CMA)) && !current_is_kswapd()
+		&& (gfp_migratetype(sc->gfp_mask) != MIGRATE_MOVABLE))
+		cma_cap = false;
+
 	while (scan < nr_to_scan && !list_empty(src)) {
 		struct list_head *move_to = src;
 		struct folio *folio;
@@ -2239,12 +2245,17 @@ static unsigned long isolate_lru_folios(unsigned long nr_to_scan,
 		nr_pages = folio_nr_pages(folio);
 		total_scan += nr_pages;
 
-		if (folio_zonenum(folio) > sc->reclaim_idx) {
+		page = &folio->page;
+
+		if ((folio_zonenum(folio) > sc->reclaim_idx)
+#ifdef CONFIG_CMA
+			|| (get_pageblock_migratetype(page) == MIGRATE_CMA && !cma_cap)
+#endif
+		) {
 			nr_skipped[folio_zonenum(folio)] += nr_pages;
 			move_to = &folios_skipped;
 			goto move;
 		}
-
 		/*
 		 * Do not count skipped folios because that makes the function
 		 * return with no isolated folios if the LRU mostly contains