From patchwork Mon Oct 16 07:12:45 2023
X-Patchwork-Submitter: "zhaoyang.huang" <zhaoyang.huang@unisoc.com>
X-Patchwork-Id: 153241
From: "zhaoyang.huang" <zhaoyang.huang@unisoc.com>
To: Andrew Morton, Johannes Weiner, Roman Gushchin,
    linux-kernel@vger.kernel.org, Zhaoyang Huang
Subject: [PATCHv6 1/1] mm: optimization on page allocation when CMA enabled
Date: Mon, 16 Oct 2023 15:12:45 +0800
Message-ID: <20231016071245.2865233-1-zhaoyang.huang@unisoc.com>

From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>

Under the current CMA utilization policy, an alloc_pages(GFP_USER) call can
'steal' UNMOVABLE & RECLAIMABLE page blocks with the help of CMA: it passes
zone_watermark_ok by counting CMA pages in, but then takes UNMOVABLE &
RECLAIMABLE pages in rmqueue. This can make a subsequent
alloc_pages(GFP_KERNEL) fail. Solve this by introducing a second watermark
check for GFP_MOVABLE allocations, which lets the allocation use CMA when
appropriate.
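To make the failure mode concrete, here is a minimal userspace sketch of the
watermark accounting mismatch. This is illustrative only, not kernel code:
watermark_ok() is a hypothetical stand-in for zone_watermark_ok(), and the
figures are the same as in the diagram that follows.

#include <stdbool.h>
#include <stdio.h>

/* hypothetical stand-in for zone_watermark_ok(); sizes in MB for clarity */
static bool watermark_ok(long free_mb, long wmark_mb, long cma_mb, bool count_cma)
{
	long usable = count_cma ? free_mb : free_mb - cma_mb;
	return usable > wmark_mb;
}

int main(void)
{
	long free_mb = 30, wmark_low = 25, cma_mb = 12;

	/* A movable allocation counts CMA: 30MB > 25MB, so the check passes,
	 * yet rmqueue may still hand out UNMOVABLE/RECLAIMABLE blocks from
	 * the mere 18MB of non-CMA memory. */
	printf("movable:   %s\n",
	       watermark_ok(free_mb, wmark_low, cma_mb, true) ? "pass" : "fail");

	/* An unmovable allocation cannot use CMA: 30 - 12 = 18MB < 25MB */
	printf("unmovable: %s\n",
	       watermark_ok(free_mb, wmark_low, cma_mb, false) ? "pass" : "fail");
	return 0;
}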
-- Free_pages(30MB)
|
|
-- WMARK_LOW(25MB)
|
-- Free_CMA(12MB)
|
|

(Only 18MB of the 30MB of free memory is non-CMA, which is below
WMARK_LOW(25MB): the movable allocation passes the watermark check only
because CMA pages are counted in.)

Signed-off-by: Johannes Weiner
---
Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
---
v6: update comments
---
 mm/page_alloc.c | 44 ++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 40 insertions(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 452459836b71..5a146aa7c0aa 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2078,6 +2078,43 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
 }
 
+#ifdef CONFIG_CMA
+/*
+ * A GFP_MOVABLE allocation could drain UNMOVABLE & RECLAIMABLE page blocks
+ * with the help of CMA, which can make GFP_KERNEL allocations fail. Check
+ * zone_watermark_ok again without ALLOC_CMA to decide whether to use CMA first.
+ */
+static bool use_cma_first(struct zone *zone, unsigned int order, unsigned int alloc_flags)
+{
+	unsigned long watermark;
+	bool cma_first = false;
+
+	watermark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK);
+	/* check if GFP_MOVABLE passed the previous zone_watermark_ok via the help of CMA */
+	if (zone_watermark_ok(zone, order, watermark, 0, alloc_flags & (~ALLOC_CMA))) {
+		/*
+		 * Balance movable allocations between regular and CMA areas by
+		 * allocating from CMA when over half of the zone's free memory
+		 * is in the CMA area.
+		 */
+		cma_first = (zone_page_state(zone, NR_FREE_CMA_PAGES) >
+				zone_page_state(zone, NR_FREE_PAGES) / 2);
+	} else {
+		/*
+		 * The watermark failing here means UNMOVABLE & RECLAIMABLE
+		 * pages are running low; use CMA first to keep them around
+		 * the corresponding watermark.
+		 */
+		cma_first = true;
+	}
+	return cma_first;
+}
+#else
+static bool use_cma_first(struct zone *zone, unsigned int order, unsigned int alloc_flags)
+{
+	return false;
+}
+#endif
 /*
  * Do the hard work of removing an element from the buddy allocator.
  * Call me with the zone->lock already held.
 */
@@ -2091,12 +2128,11 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
 	if (IS_ENABLED(CONFIG_CMA)) {
 		/*
 		 * Balance movable allocations between regular and CMA areas by
-		 * allocating from CMA when over half of the zone's free memory
-		 * is in the CMA area.
+		 * allocating from CMA based on a second zone_watermark_ok
+		 * check, to see if the first one passed via the help of CMA.
 		 */
 		if (alloc_flags & ALLOC_CMA &&
-		    zone_page_state(zone, NR_FREE_CMA_PAGES) >
-		    zone_page_state(zone, NR_FREE_PAGES) / 2) {
+		    use_cma_first(zone, order, alloc_flags)) {
 			page = __rmqueue_cma_fallback(zone, order);
 			if (page)
 				return page;
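
For illustration, the decision logic that use_cma_first() adds can also be
simulated outside the kernel. A minimal sketch, assuming MB-granularity
counters; wmark_ok_without_cma() is a hypothetical stand-in for
zone_watermark_ok() called with alloc_flags & ~ALLOC_CMA, not the real
kernel API:

#include <stdbool.h>
#include <stdio.h>

/* stand-in for zone_watermark_ok(..., alloc_flags & ~ALLOC_CMA) */
static bool wmark_ok_without_cma(long free_mb, long cma_mb, long wmark_mb)
{
	return free_mb - cma_mb > wmark_mb;
}

/* mirrors the branch structure of the patch's use_cma_first() */
static bool use_cma_first(long free_mb, long cma_mb, long wmark_mb)
{
	if (wmark_ok_without_cma(free_mb, cma_mb, wmark_mb))
		/* UNMOVABLE & RECLAIMABLE blocks are still healthy: keep the
		 * old heuristic, prefer CMA only when it holds over half of
		 * the zone's free memory */
		return cma_mb > free_mb / 2;
	/* non-CMA memory already below the watermark: drain CMA first */
	return true;
}

int main(void)
{
	printf("%d\n", use_cma_first(30, 12, 25));	/* 1: diagram's scenario */
	printf("%d\n", use_cma_first(60, 12, 25));	/* 0: over-half rule, 12 <= 30 */
	return 0;
}

With the diagram's figures the helper returns true and the movable allocation
drains CMA first; once non-CMA free memory sits comfortably above the
watermark, it falls back to the original over-half balancing heuristic.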