Message ID | 20230616092112.387-1-lipeifeng@oppo.com |
---|---|
State | New |
Headers |
From: lipeifeng@oppo.com
To: akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, surenb@google.com, gregkh@google.com, lipeifeng <lipeifeng@oppo.com>
Subject: [PATCH] mm: vmscan: export func:shrink_slab
Date: Fri, 16 Jun 2023 17:21:12 +0800
Message-Id: <20230616092112.387-1-lipeifeng@oppo.com>
X-Mailer: git-send-email 2.34.1 |
Series |
mm: vmscan: export func:shrink_slab
Commit Message
李培锋
June 16, 2023, 9:21 a.m. UTC
From: lipeifeng <lipeifeng@oppo.com>

Some shrinkers invoked from shrink_slab() can enter a synchronous wait, due to a lock or other reasons, which causes kswapd or direct reclaim to be blocked.

This patch exports shrink_slab() so that it can be called from drivers that shrink memory independently.

Signed-off-by: lipeifeng <lipeifeng@oppo.com>
---
 mm/vmscan.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
Comments
On 16.06.23 11:21, lipeifeng@oppo.com wrote:
> From: lipeifeng <lipeifeng@oppo.com>
>
> Some of shrinkers during shrink_slab would enter synchronous-wait
> due to lock or other reasons, which would causes kswapd or
> direct_reclaim to be blocked.
>
> This patch export shrink_slab so that it can be called in drivers
> which can shrink memory independently.
>
> Signed-off-by: lipeifeng <lipeifeng@oppo.com>
> ---
>  mm/vmscan.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 6d0cd2840cf0..2e54fa52e7ec 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1043,7 +1043,7 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
>   *
>   * Returns the number of reclaimed slab objects.
>   */
> -static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
> +unsigned long shrink_slab(gfp_t gfp_mask, int nid,
>  				 struct mem_cgroup *memcg,
>  				 int priority)
>  {
> @@ -1087,6 +1087,7 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
>  	cond_resched();
>  	return freed;
>  }
> +EXPORT_SYMBOL_GPL(shrink_slab);
>
>  static unsigned long drop_slab_node(int nid)
>  {

It feels like something we don't want arbitrary drivers to call.

Unrelated to that, this better be sent along with actual driver usage.
On 16.06.23 11:21, lipeifeng@oppo.com wrote:
>> From: lipeifeng <lipeifeng@oppo.com>
>>
>> Some of shrinkers during shrink_slab would enter synchronous-wait due
>> to lock or other reasons, which would causes kswapd or direct_reclaim
>> to be blocked.
>>
>> This patch export shrink_slab so that it can be called in drivers
>> which can shrink memory independently.
>>
>> Signed-off-by: lipeifeng <lipeifeng@oppo.com>
>> ---
>>  mm/vmscan.c | 3 ++-
>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>> index 6d0cd2840cf0..2e54fa52e7ec 100644
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -1043,7 +1043,7 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
>>   *
>>   * Returns the number of reclaimed slab objects.
>>   */
>> -static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
>> +unsigned long shrink_slab(gfp_t gfp_mask, int nid,
>>  				 struct mem_cgroup *memcg,
>>  				 int priority)
>>  {
>> @@ -1087,6 +1087,7 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
>>  	cond_resched();
>>  	return freed;
>>  }
>> +EXPORT_SYMBOL_GPL(shrink_slab);
>>
>>  static unsigned long drop_slab_node(int nid)
>>  {
>
> It feels like something we don't want arbitrary drivers to call.
>
> Unrelated to that, this better be sent along with actual driver usage.

Hi Sir:

Virtually, we have implemented an async shrink_slabd isolated from kswapd and direct_reclaim. The goal is to avoid the synchronous wait in kswapd or direct_reclaim caused by some shrinkers.

But the async shrink_slabd has only been applied to mobile products, so I have not confirmed the risks on other products. For the above reasons, I want to merge this patch to export shrink_slab first; the driver patch would be pushed once I have checked all the risks.

Some informal code files of the driver are attached for your reference.
-----Original Message-----
From: David Hildenbrand <david@redhat.com>
Sent: June 16, 2023 17:43
To: 李培锋(wink) <lipeifeng@oppo.com>; akpm@linux-foundation.org
Cc: linux-mm@kvack.org; linux-kernel@vger.kernel.org; surenb@google.com; gregkh@google.com
Subject: Re: [PATCH] mm: vmscan: export func:shrink_slab

[attached driver file]

// SPDX-License-Identifier: GPL-2.0-only
/*
 * Copyright (C) 2020-2022 Oplus. All rights reserved.
 */

#define pr_fmt(fmt) "shrink_async: " fmt

#include <linux/module.h>
#include <trace/hooks/vmscan.h>
#include <linux/swap.h>
#include <linux/proc_fs.h>
#include <linux/gfp.h>
#include <linux/types.h>
#include <linux/cpufreq.h>
#include <linux/freezer.h>
#include <linux/wait.h>

#define SHRINK_SLABD_NAME "kshrink_slabd"

extern unsigned long shrink_slab(gfp_t gfp_mask, int nid,
				 struct mem_cgroup *memcg, int priority);

static int kshrink_slabd_pid;
static struct task_struct *shrink_slabd_tsk = NULL;
static bool async_shrink_slabd_setup = false;
wait_queue_head_t shrink_slabd_wait;

struct async_slabd_parameter {
	struct mem_cgroup *shrink_slabd_memcg;
	gfp_t shrink_slabd_gfp_mask;
	atomic_t shrink_slabd_runnable;
	int shrink_slabd_nid;
	int priority;
} asp;

static struct reclaim_state async_reclaim_state = {
	.reclaimed_slab = 0,
};

static bool is_shrink_slabd_task(struct task_struct *tsk)
{
	return tsk->pid == kshrink_slabd_pid;
}

bool wakeup_shrink_slabd(gfp_t gfp_mask, int nid,
			 struct mem_cgroup *memcg, int priority)
{
	if (unlikely(!async_shrink_slabd_setup))
		return false;

	if (atomic_read(&(asp.shrink_slabd_runnable)) == 1)
		return true;

	current->reclaim_state = &async_reclaim_state;
	asp.shrink_slabd_gfp_mask = gfp_mask;
	asp.shrink_slabd_nid = nid;
	asp.shrink_slabd_memcg = memcg;
	asp.priority = priority;
	atomic_set(&(asp.shrink_slabd_runnable), 1);
	wake_up_interruptible(&shrink_slabd_wait);

	return true;
}

void set_async_slabd_cpus(void)
{
	struct cpumask mask;
	struct cpumask *cpumask = &mask;
	pg_data_t *pgdat = NODE_DATA(0);
	unsigned int cpu = 0, cpufreq_max_tmp = 0;
	struct cpufreq_policy *policy_max;
	static bool set_slabd_cpus_success = false;

	if (unlikely(!async_shrink_slabd_setup))
		return;

	if (likely(set_slabd_cpus_success))
		return;

	for_each_possible_cpu(cpu) {
		struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);

		if (policy == NULL)
			continue;

		if (policy->cpuinfo.max_freq >= cpufreq_max_tmp) {
			cpufreq_max_tmp = policy->cpuinfo.max_freq;
			policy_max = policy;
		}
	}

	cpumask_copy(cpumask, cpumask_of_node(pgdat->node_id));
	cpumask_andnot(cpumask, cpumask, policy_max->related_cpus);

	if (!cpumask_empty(cpumask)) {
		set_cpus_allowed_ptr(shrink_slabd_tsk, cpumask);
		set_slabd_cpus_success = true;
	}
}

static int kshrink_slabd_func(void *p)
{
	struct mem_cgroup *memcg;
	gfp_t gfp_mask;
	int nid, priority;

	/*
	 * Tell the memory management that we're a "memory allocator",
	 * and that if we need more memory we should get access to it
	 * regardless (see "__alloc_pages()"). "kswapd" should
	 * never get caught in the normal page freeing logic.
	 *
	 * (Kswapd normally doesn't need memory anyway, but sometimes
	 * you need a small amount of memory in order to be able to
	 * page out something else, and this flag essentially protects
	 * us from recursively trying to free more memory as we're
	 * trying to free the first piece of memory in the first place).
	 */
	current->flags |= PF_MEMALLOC | PF_SWAPWRITE | PF_KSWAPD;
	set_freezable();

	current->reclaim_state = &async_reclaim_state;
	asp.shrink_slabd_gfp_mask = 0;
	asp.shrink_slabd_nid = 0;
	asp.shrink_slabd_memcg = NULL;
	atomic_set(&(asp.shrink_slabd_runnable), 0);
	asp.priority = 0;

	while (!kthread_should_stop()) {
		wait_event_freezable(shrink_slabd_wait,
			(atomic_read(&(asp.shrink_slabd_runnable)) == 1));

		set_async_slabd_cpus();

		nid = asp.shrink_slabd_nid;
		gfp_mask = asp.shrink_slabd_gfp_mask;
		priority = asp.priority;
		memcg = asp.shrink_slabd_memcg;

		shrink_slab(gfp_mask, nid, memcg, priority);

		atomic_set(&(asp.shrink_slabd_runnable), 0);
	}

	current->flags &= ~(PF_MEMALLOC | PF_SWAPWRITE | PF_KSWAPD);
	current->reclaim_state = NULL;

	return 0;
}

static void should_shrink_async(void *data, gfp_t gfp_mask, int nid,
				struct mem_cgroup *memcg, int priority,
				bool *bypass)
{
	if (unlikely(!async_shrink_slabd_setup)) {
		*bypass = false;
		return;
	}

	if (is_shrink_slabd_task(current)) {
		*bypass = false;
	} else {
		*bypass = true;
		wakeup_shrink_slabd(gfp_mask, nid, memcg, priority);
	}
}

static int register_shrink_async_vendor_hooks(void)
{
	int ret = 0;

	ret = register_trace_android_vh_shrink_slab_bypass(should_shrink_async, NULL);
	if (ret != 0) {
		pr_err("register_trace_android_vh_shrink_slab_bypass failed! ret=%d\n", ret);
		goto out;
	}

out:
	return ret;
}

static void unregister_shrink_async_vendor_hooks(void)
{
	unregister_trace_android_vh_shrink_slab_bypass(should_shrink_async, NULL);
}

static int __init shrink_async_init(void)
{
	int ret = 0;

	ret = register_shrink_async_vendor_hooks();
	if (ret != 0)
		return ret;

	init_waitqueue_head(&shrink_slabd_wait);

	shrink_slabd_tsk = kthread_run(kshrink_slabd_func, NULL, SHRINK_SLABD_NAME);
	if (IS_ERR_OR_NULL(shrink_slabd_tsk)) {
		pr_err("Failed to start shrink_slabd on node 0\n");
		ret = PTR_ERR(shrink_slabd_tsk);
		shrink_slabd_tsk = NULL;
		return ret;
	}

	kshrink_slabd_pid = shrink_slabd_tsk->pid;
	async_shrink_slabd_setup = true;

	pr_info("kshrink_async succeed!\n");
	return 0;
}

static void __exit shrink_async_exit(void)
{
	unregister_shrink_async_vendor_hooks();
	pr_info("shrink_async exit succeed!\n");
}

module_init(shrink_async_init);
module_exit(shrink_async_exit);

MODULE_LICENSE("GPL v2");
On 16.06.23 11:21, lipeifeng@oppo.com wrote:
>> It feels like something we don't want arbitrary drivers to call.
>>
>> Unrelated to that, this better be sent along with actual driver usage.
>
> Hi Sir:
>
> Virtually, we have implemented an async shrink_slabd isolated from kswapd
> and direct_reclaim. The goal is to avoid the synchronous wait in kswapd or
> direct_reclaim caused by some shrinkers.
>
> But the async shrink_slabd has only been applied to mobile products, so I
> have not confirmed the risks on other products. For the above reasons, I
> want to merge this patch to export shrink_slab first; the driver patch
> would be pushed once I have checked all the risks.
>
> Some informal code files of the driver are attached for your reference.

Hi Sir:

Please help review the patch and merge it if there are no problems. Thank you very much.
-----Original Message-----
From: 李培锋(wink)
Sent: June 20, 2023 11:05
To: David Hildenbrand <david@redhat.com>; akpm@linux-foundation.org
Cc: linux-mm@kvack.org; linux-kernel@vger.kernel.org; surenb@google.com; gregkh@google.com; zhangshiming@opp.com; 郭健 <guojian@oppo.com>
Subject: Re: [PATCH] mm: vmscan: export func:shrink_slab
On Tue, Jun 20, 2023 at 03:05:27AM +0000, 李培锋(wink) wrote:
> > It feels like something we don't want arbitrary drivers to call.
> >
> > Unrelated to that, this better be sent along with actual driver usage.
>
> Hi Sir:
>
> Virtually, we have implemented async shrink_slabd isolated from kswapd and direct_reclaim.
> The goal above it is to avoid the sync-wait in kswapd or direct_reclaim due to some shrinkers.
>
> But the async shrink_slabd was only applied to mobile products so that I didn't make sure any
> risk in other products. For the above reasons, I wanna merge the patch to export shrink_slab
> and the patch of drivers would be considered to be pushed if I check all the risks.
>
> Some informal code files of driver are attached for your reference.
You have to submit this as a real series, we can not accept exports for no in-kernel users (nor would you want us to, as that ends up being an unmaintainable mess.)

So please resubmit this as a proper patch series, with the user of this function, and then it can be properly evaluated.

As-is, this can not be accepted at all.

thanks,

greg k-h
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 6d0cd2840cf0..2e54fa52e7ec 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1043,7 +1043,7 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
  *
  * Returns the number of reclaimed slab objects.
  */
-static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
+unsigned long shrink_slab(gfp_t gfp_mask, int nid,
 				 struct mem_cgroup *memcg,
 				 int priority)
 {
@@ -1087,6 +1087,7 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
 	cond_resched();
 	return freed;
 }
+EXPORT_SYMBOL_GPL(shrink_slab);
 
 static unsigned long drop_slab_node(int nid)
 {