From patchwork Mon May 8 09:33:02 2023
X-Patchwork-Submitter: "zhaoyang.huang" <zhaoyang.huang@unisoc.com>
X-Patchwork-Id: 91037
From: "zhaoyang.huang" <zhaoyang.huang@unisoc.com>
To: Andrew Morton, Roman Gushchin, Minchan Kim, Joonsoo Kim, Zhaoyang Huang
Subject: [PATCH 1/2] mm: optimization on page allocation when CMA enabled
Date: Mon, 8 May 2023 17:33:02 +0800
Message-ID: <1683538383-19685-2-git-send-email-zhaoyang.huang@unisoc.com>
In-Reply-To: <1683538383-19685-1-git-send-email-zhaoyang.huang@unisoc.com>
References: <1683538383-19685-1-git-send-email-zhaoyang.huang@unisoc.com>
MIME-Version: 1.0
From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>

Consider the series of scenarios below, with WMARK_LOW = 25MB and WMARK_MIN = 5MB on a zone with 1.9GB of managed pages. The current 'fixed 1/2 ratio' only starts using CMA at scenario C, by which point free UNMOVABLE & RECLAIMABLE (U&R) pages have already dropped below WMARK_LOW. This should be deemed a violation of the current memory policy: U&R pages should either stay around WMARK_LOW when there is no allocation pressure, or be reclaimed by entering the slowpath.

 -- Free_pages
 |
 |         -- WMARK_LOW
 |         |
 |         |         -- Free_CMA
 |         |         |

Each scenario is expressed as Free_CMA/Free_pages in MB; "Y" means movable allocations are served from CMA first:

 Free_CMA/Free_pages(MB)   A(12/30)   B(12/25)   C(12/20)
 fixed 1/2 ratio               N          N          Y
 this commit                   Y          Y          Y

Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
---
v2: do proportion check when zone_watermark_ok, update commit message
v3: update coding style and simplify the logic when zone_watermark_ok
---
 mm/page_alloc.c | 46 ++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 40 insertions(+), 6 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0745aed..7aca49d 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3071,6 +3071,41 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 }
 
+#ifdef CONFIG_CMA
+/*
+ * A GFP_MOVABLE allocation could drain UNMOVABLE & RECLAIMABLE page blocks
+ * with the help of CMA, which can make GFP_KERNEL allocations fail. Check
+ * zone_watermark_ok() again without ALLOC_CMA to decide whether to use CMA first.
+ */
+static bool __if_use_cma_first(struct zone *zone, unsigned int order, unsigned int alloc_flags)
+{
+	unsigned long watermark;
+	bool cma_first = false;
+
+	watermark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK);
+	/* check if GFP_MOVABLE passed the previous zone_watermark_ok() only with the help of CMA */
+	if (!zone_watermark_ok(zone, order, watermark, 0, alloc_flags & (~ALLOC_CMA)))
+		/*
+		 * Watermark failure means UNMOVABLE & RECLAIMABLE pages are
+		 * not enough now; use CMA first to keep them around the
+		 * corresponding watermark.
+		 */
+		cma_first = true;
+	else
+		/*
+		 * Keep the previous fixed 1/2 logic when the watermark is ok,
+		 * as we have the protection above now.
+		 */
+		cma_first = (zone_page_state(zone, NR_FREE_CMA_PAGES) >
+				zone_page_state(zone, NR_FREE_PAGES) / 2);
+	return cma_first;
+}
+#else
+static bool __if_use_cma_first(struct zone *zone, unsigned int order, unsigned int alloc_flags)
+{
+	return false;
+}
+#endif
 /*
  * Do the hard work of removing an element from the buddy allocator.
  * Call me with the zone->lock already held.
@@ -3084,13 +3119,12 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 	if (IS_ENABLED(CONFIG_CMA)) {
 		/*
 		 * Balance movable allocations between regular and CMA areas by
-		 * allocating from CMA when over half of the zone's free memory
-		 * is in the CMA area.
+		 * allocating from CMA based on checking zone_watermark_ok()
+		 * again to see if the latest check passed with the help of CMA.
 		 */
-		if (alloc_flags & ALLOC_CMA &&
-		    zone_page_state(zone, NR_FREE_CMA_PAGES) >
-		    zone_page_state(zone, NR_FREE_PAGES) / 2) {
-			page = __rmqueue_cma_fallback(zone, order);
+		if (migratetype == MIGRATE_MOVABLE) {
+			page = __if_use_cma_first(zone, order, alloc_flags) ?
+				__rmqueue_cma_fallback(zone, order) : NULL;
 			if (page)
 				return page;
 		}

From patchwork Mon May 8 09:33:03 2023
X-Patchwork-Submitter: "zhaoyang.huang" <zhaoyang.huang@unisoc.com>
X-Patchwork-Id: 91040
From: "zhaoyang.huang" <zhaoyang.huang@unisoc.com>
To: Andrew Morton, Roman Gushchin, Minchan Kim, Joonsoo Kim, Zhaoyang Huang
Subject: [PATCH 2/2] mm: skip CMA pages when they are not available
Date: Mon, 8 May 2023 17:33:03 +0800
Message-ID: <1683538383-19685-3-git-send-email-zhaoyang.huang@unisoc.com>
In-Reply-To: <1683538383-19685-1-git-send-email-zhaoyang.huang@unisoc.com>
References: <1683538383-19685-1-git-send-email-zhaoyang.huang@unisoc.com>
MIME-Version: 1.0
From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>

This patch fixes unproductive reclaim of CMA pages by skipping them when they are not available to the current allocation context. The problem arises from the OOM issue below, which was caused by a large proportion of MIGRATE_CMA pages among the free pages. Commit 168676649 addressed a related symptom by trying CMA pages first, instead of as a fallback, in rmqueue.

04166 < 4> [ 36.172486] [03-19 10:05:52.172] ActivityManager: page allocation failure: order:0, mode:0xc00(GFP_NOIO), nodemask=(null),cpuset=foreground,mems_allowed=0
0419C < 4> [ 36.189447] [03-19 10:05:52.189] DMA32: 0*4kB 447*8kB (C) 217*16kB (C) 124*32kB (C) 136*64kB (C) 70*128kB (C) 22*256kB (C) 3*512kB (C) 0*1024kB 0*2048kB 0*4096kB = 35848kB
0419D < 4> [ 36.193125] [03-19 10:05:52.193] Normal: 231*4kB (UMEH) 49*8kB (MEH) 14*16kB (H) 13*32kB (H) 8*64kB (H) 2*128kB (H) 0*256kB 1*512kB (H) 0*1024kB 0*2048kB 0*4096kB = 3236kB
......
041EA < 4> [ 36.234447] [03-19 10:05:52.234] SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
041EB < 4> [ 36.234455] [03-19 10:05:52.234] cache: ext4_io_end, object size: 64, buffer size: 64, default order: 0, min order: 0
041EC < 4> [ 36.234459] [03-19 10:05:52.234] node 0: slabs: 53, objs: 3392, free: 0

Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
---
 mm/vmscan.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index bd6637f..19fb445 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2225,10 +2225,16 @@ static unsigned long isolate_lru_folios(unsigned long nr_to_scan,
 	unsigned long nr_skipped[MAX_NR_ZONES] = { 0, };
 	unsigned long skipped = 0;
 	unsigned long scan, total_scan, nr_pages;
+	bool cma_cap = true;
+	struct page *page;
 	LIST_HEAD(folios_skipped);
 
 	total_scan = 0;
 	scan = 0;
+	if ((IS_ENABLED(CONFIG_CMA)) && !current_is_kswapd()
+			&& (gfp_migratetype(sc->gfp_mask) != MIGRATE_MOVABLE))
+		cma_cap = false;
+
 	while (scan < nr_to_scan && !list_empty(src)) {
 		struct list_head *move_to = src;
 		struct folio *folio;
@@ -2239,12 +2245,17 @@ static unsigned long isolate_lru_folios(unsigned long nr_to_scan,
 		nr_pages = folio_nr_pages(folio);
 		total_scan += nr_pages;
 
-		if (folio_zonenum(folio) > sc->reclaim_idx) {
+		page = &folio->page;
+
+		if ((folio_zonenum(folio) > sc->reclaim_idx)
+#ifdef CONFIG_CMA
+		    || (get_pageblock_migratetype(page) == MIGRATE_CMA && !cma_cap)
+#endif
+		    ) {
 			nr_skipped[folio_zonenum(folio)] += nr_pages;
 			move_to = &folios_skipped;
 			goto move;
 		}
-
 		/*
 		 * Do not count skipped folios because that makes the function
 		 * return with no isolated folios if the LRU mostly contains