From patchwork Mon May 1 16:54:11 2023
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 89098
Date: Mon, 1 May 2023 09:54:11 -0700
In-Reply-To: <20230501165450.15352-1-surenb@google.com>
References: <20230501165450.15352-1-surenb@google.com>
Message-ID: <20230501165450.15352-2-surenb@google.com>
Subject: [PATCH 01/40] lib/string_helpers: Drop space in string_get_size's output
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
From: Kent Overstreet

Previously, string_get_size() outputted a space between the number and
the units, i.e. "9.88 MiB". This changes it to "9.88MiB", which allows it
to be parsed correctly by the 'sort -h' command.

Signed-off-by: Kent Overstreet
Signed-off-by: Suren Baghdasaryan
Cc: Andy Shevchenko
Cc: Michael Ellerman
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: "Michael S. Tsirkin"
Cc: Jason Wang
Cc: "Noralf Trønnes"
Cc: Jens Axboe
---
 lib/string_helpers.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/lib/string_helpers.c b/lib/string_helpers.c
index 230020a2e076..593b29fece32 100644
--- a/lib/string_helpers.c
+++ b/lib/string_helpers.c
@@ -126,8 +126,7 @@ void string_get_size(u64 size, u64 blk_size, const enum string_size_units units,
 	else
 		unit = units_str[units][i];
 
-	snprintf(buf, len, "%u%s %s", (u32)size,
-		 tmp, unit);
+	snprintf(buf, len, "%u%s%s", (u32)size, tmp, unit);
 }
 EXPORT_SYMBOL(string_get_size);
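For illustration only (not part of the patch): a minimal userspace sketch of
the format change. The literal arguments are made up; the point is that
'sort -h' expects the unit suffix to follow the number with no space, as the
commit message states.

    #include <stdio.h>

    int main(void)
    {
            char old_buf[32], new_buf[32];

            /* old format: number, fractional part, space, unit */
            snprintf(old_buf, sizeof(old_buf), "%u%s %s", 9U, ".88", "MiB");
            /* new format: unit follows the number directly */
            snprintf(new_buf, sizeof(new_buf), "%u%s%s", 9U, ".88", "MiB");

            printf("old: %s\nnew: %s\n", old_buf, new_buf); /* "9.88 MiB" vs "9.88MiB" */
            return 0;
    }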
From patchwork Mon May 1 16:54:12 2023
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 89126
Date: Mon, 1 May 2023 09:54:12 -0700
In-Reply-To: <20230501165450.15352-1-surenb@google.com>
References: <20230501165450.15352-1-surenb@google.com>
Message-ID: <20230501165450.15352-3-surenb@google.com>
Subject: [PATCH 02/40] scripts/kallsyms: Always include __start and __stop symbols
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
From: Kent Overstreet

These symbols are used to denote section boundaries: by always including
them we can unify loading sections from modules with loading built-in
sections, which leads to some significant cleanup.
Signed-off-by: Kent Overstreet
Signed-off-by: Suren Baghdasaryan
---
 scripts/kallsyms.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/scripts/kallsyms.c b/scripts/kallsyms.c
index 0d2db41177b2..7b7dbeb5bd6e 100644
--- a/scripts/kallsyms.c
+++ b/scripts/kallsyms.c
@@ -203,6 +203,11 @@ static int symbol_in_range(const struct sym_entry *s,
 	return 0;
 }
 
+static bool string_starts_with(const char *s, const char *prefix)
+{
+	return strncmp(s, prefix, strlen(prefix)) == 0;
+}
+
 static int symbol_valid(const struct sym_entry *s)
 {
 	const char *name = sym_name(s);
@@ -210,6 +215,14 @@ static int symbol_valid(const struct sym_entry *s)
 	/* if --all-symbols is not specified, then symbols outside the text
 	 * and inittext sections are discarded */
 	if (!all_symbols) {
+		/*
+		 * Symbols starting with __start and __stop are used to denote
+		 * section boundaries, and should always be included:
+		 */
+		if (string_starts_with(name, "__start_") ||
+		    string_starts_with(name, "__stop_"))
+			return 1;
+
 		if (symbol_in_range(s, text_ranges,
 				    ARRAY_SIZE(text_ranges)) == 0)
 			return 0;
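For context, a minimal sketch (not part of the patch) of how __start_<sec> and
__stop_<sec> boundary symbols are typically consumed. It assumes GCC/Clang with
GNU ld on ELF, where the linker generates these symbols for any custom section
whose name is a valid C identifier; the section name "handlers" is made up.

    #include <stdio.h>

    struct handler {
            const char *name;
    };

    /* Two entries placed in a custom "handlers" section. */
    static const struct handler h1 __attribute__((used, section("handlers"))) = { "first" };
    static const struct handler h2 __attribute__((used, section("handlers"))) = { "second" };

    /* Linker-generated boundary symbols for that section. */
    extern const struct handler __start_handlers[];
    extern const struct handler __stop_handlers[];

    int main(void)
    {
            /* Walk everything between the boundary symbols. */
            for (const struct handler *h = __start_handlers; h < __stop_handlers; h++)
                    printf("%s\n", h->name);
            return 0;
    }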
From patchwork Mon May 1 16:54:13 2023
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 89122
Date: Mon, 1 May 2023 09:54:13 -0700
In-Reply-To: <20230501165450.15352-1-surenb@google.com>
References: <20230501165450.15352-1-surenb@google.com>
Message-ID: <20230501165450.15352-4-surenb@google.com>
Subject: [PATCH 03/40] fs: Convert alloc_inode_sb() to a macro
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
From: Kent Overstreet

We're introducing alloc tagging, which tracks memory allocations by
callsite. Converting alloc_inode_sb() to a macro means allocations will
be tracked by its caller, which is a bit more useful.

Signed-off-by: Kent Overstreet
Signed-off-by: Suren Baghdasaryan
Cc: Alexander Viro
---
 include/linux/fs.h | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/include/linux/fs.h b/include/linux/fs.h
index 21a981680856..4905ce14db0b 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -2699,11 +2699,7 @@ int setattr_should_drop_sgid(struct mnt_idmap *idmap,
  * This must be used for allocating filesystems specific inodes to set
  * up the inode reclaim context correctly.
  */
-static inline void *
-alloc_inode_sb(struct super_block *sb, struct kmem_cache *cache, gfp_t gfp)
-{
-	return kmem_cache_alloc_lru(cache, &sb->s_inode_lru, gfp);
-}
+#define alloc_inode_sb(_sb, _cache, _gfp) kmem_cache_alloc_lru(_cache, &_sb->s_inode_lru, _gfp)
 
 extern void __insert_inode_hash(struct inode *, unsigned long hashval);
 static inline void insert_inode_hash(struct inode *inode)
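For illustration only: a userspace sketch of why a macro is attributed to its
caller while an inline function is not. __FILE__/__LINE__ are resolved by the
preprocessor, so inside a function body they always name the definition site,
whereas a macro expands at each call site. Names are made up, and the macro
uses the GCC/Clang statement-expression extension.

    #include <stdio.h>
    #include <stdlib.h>

    static inline void *my_alloc_fn(size_t size)
    {
            /* Always reports this definition line, no matter who calls it. */
            printf("alloc at %s:%d\n", __FILE__, __LINE__);
            return malloc(size);
    }

    #define my_alloc_macro(size) ({					\
            printf("alloc at %s:%d\n", __FILE__, __LINE__);		\
            malloc(size);						\
    })

    int main(void)
    {
            void *a = my_alloc_fn(16);      /* reports the function's line */
            void *b = my_alloc_macro(16);   /* reports this caller's line  */
            free(a);
            free(b);
            return 0;
    }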
From patchwork Mon May 1 16:54:14 2023
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 89088
Date: Mon, 1 May 2023 09:54:14 -0700
In-Reply-To: <20230501165450.15352-1-surenb@google.com>
References: <20230501165450.15352-1-surenb@google.com>
Message-ID: <20230501165450.15352-5-surenb@google.com>
Subject: [PATCH 04/40] nodemask: Split out include/linux/nodemask_types.h
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
From: Kent Overstreet

sched.h, which defines task_struct, needs nodemask_t - but sched.h is a
frequently used header and ideally shouldn't be pulling in any more code
than it needs to. This splits out nodemask_types.h, which has the
definition sched.h needs; that avoids a circular header dependency in the
alloc tagging patch series, and as a bonus should speed up kernel build
times.
Signed-off-by: Kent Overstreet
Signed-off-by: Suren Baghdasaryan
Cc: Ingo Molnar
Cc: Peter Zijlstra
---
 include/linux/nodemask.h       | 2 +-
 include/linux/nodemask_types.h | 9 +++++++++
 include/linux/sched.h          | 2 +-
 3 files changed, 11 insertions(+), 2 deletions(-)
 create mode 100644 include/linux/nodemask_types.h

diff --git a/include/linux/nodemask.h b/include/linux/nodemask.h
index bb0ee80526b2..fda37b6df274 100644
--- a/include/linux/nodemask.h
+++ b/include/linux/nodemask.h
@@ -93,10 +93,10 @@
 #include
 #include
 #include
+#include <linux/nodemask_types.h>
 #include
 #include
 
-typedef struct { DECLARE_BITMAP(bits, MAX_NUMNODES); } nodemask_t;
 extern nodemask_t _unused_nodemask_arg_;
 
 /**
diff --git a/include/linux/nodemask_types.h b/include/linux/nodemask_types.h
new file mode 100644
index 000000000000..84c2f47c4237
--- /dev/null
+++ b/include/linux/nodemask_types.h
@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __LINUX_NODEMASK_TYPES_H
+#define __LINUX_NODEMASK_TYPES_H
+
+#include
+
+typedef struct { DECLARE_BITMAP(bits, MAX_NUMNODES); } nodemask_t;
+
+#endif /* __LINUX_NODEMASK_TYPES_H */
diff --git a/include/linux/sched.h b/include/linux/sched.h
index eed5d65b8d1f..35e7efdea2d9 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -20,7 +20,7 @@
 #include
 #include
 #include
-#include <linux/nodemask.h>
+#include <linux/nodemask_types.h>
 #include
 #include
 #include
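A minimal sketch (not from the patch, all names hypothetical) of the
"_types.h" split pattern this change applies: the lightweight header carries
only the type, so headers that merely embed the type can include it without
pulling in, or cycling with, the full API header.

    /* --- foo_types.h: only the type, no further dependencies --- */
    #ifndef _FOO_TYPES_H
    #define _FOO_TYPES_H
    typedef struct { unsigned long bits[4]; } foo_mask_t;
    #endif

    /* --- foo.h: the full API (would include foo_types.h first) --- */
    static inline int foo_mask_empty(const foo_mask_t *m)
    {
            int i;

            for (i = 0; i < 4; i++)
                    if (m->bits[i])
                            return 0;
            return 1;
    }

    /* --- a sched.h-like header that only stores the type needs just
     *     foo_types.h, not all of foo.h --- */
    struct task_like {
            foo_mask_t allowed;
    };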
From patchwork Mon May 1 16:54:15 2023
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 89093
Date: Mon, 1 May 2023 09:54:15 -0700
In-Reply-To: <20230501165450.15352-1-surenb@google.com>
References: <20230501165450.15352-1-surenb@google.com>
Message-ID: <20230501165450.15352-6-surenb@google.com>
Subject: [PATCH 05/40] prandom: Remove unused include
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
From: Kent Overstreet

prandom.h doesn't use percpu.h - this fixes some circular header issues.
Signed-off-by: Kent Overstreet
Signed-off-by: Suren Baghdasaryan
---
 include/linux/prandom.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/include/linux/prandom.h b/include/linux/prandom.h
index f2ed5b72b3d6..f7f1e5251c67 100644
--- a/include/linux/prandom.h
+++ b/include/linux/prandom.h
@@ -10,7 +10,6 @@
 #include
 #include
-#include <linux/percpu.h>
 #include
 
 struct rnd_state {
From patchwork Mon May 1 16:54:16 2023
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 89107
Date: Mon, 1 May 2023 09:54:16 -0700
In-Reply-To: <20230501165450.15352-1-surenb@google.com>
References: <20230501165450.15352-1-surenb@google.com>
Message-ID: <20230501165450.15352-7-surenb@google.com>
Subject: [PATCH 06/40] lib/string.c: strsep_no_empty()
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
From: Kent Overstreet

This adds a new helper which is like strsep, except that it skips empty
tokens.

Signed-off-by: Kent Overstreet
Signed-off-by: Suren Baghdasaryan
---
 include/linux/string.h |  1 +
 lib/string.c           | 19 +++++++++++++++++++
 2 files changed, 20 insertions(+)

diff --git a/include/linux/string.h b/include/linux/string.h
index c062c581a98b..6cd5451c262c 100644
--- a/include/linux/string.h
+++ b/include/linux/string.h
@@ -96,6 +96,7 @@ extern char * strpbrk(const char *,const char *);
 #ifndef __HAVE_ARCH_STRSEP
 extern char * strsep(char **,const char *);
 #endif
+extern char *strsep_no_empty(char **, const char *);
 #ifndef __HAVE_ARCH_STRSPN
 extern __kernel_size_t strspn(const char *,const char *);
 #endif
diff --git a/lib/string.c b/lib/string.c
index 3d55ef890106..dd4914baf45a 100644
--- a/lib/string.c
+++ b/lib/string.c
@@ -520,6 +520,25 @@ char *strsep(char **s, const char *ct)
 EXPORT_SYMBOL(strsep);
 #endif
 
+/**
+ * strsep_no_empty - Split a string into tokens, but don't return empty tokens
+ * @s: The string to be searched
+ * @ct: The characters to search for
+ *
+ * strsep() updates @s to point after the token, ready for the next call.
+ */
+char *strsep_no_empty(char **s, const char *ct)
+{
+	char *ret;
+
+	do {
+		ret = strsep(s, ct);
+	} while (ret && !*ret);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(strsep_no_empty);
+
 #ifndef __HAVE_ARCH_MEMSET
 /**
  * memset - Fill a region of memory with the given value
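For illustration only: a userspace demonstration of the helper's behaviour,
re-implemented on top of libc strsep() (a BSD/GNU extension). Consecutive
delimiters produce no empty tokens, unlike plain strsep().

    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <string.h>

    /* Same logic as the helper added by this patch. */
    static char *strsep_no_empty(char **s, const char *ct)
    {
            char *ret;

            do {
                    ret = strsep(s, ct);
            } while (ret && !*ret);

            return ret;
    }

    int main(void)
    {
            char buf[] = "//usr//local/bin/";
            char *p = buf, *tok;

            while ((tok = strsep_no_empty(&p, "/")))
                    printf("token: %s\n", tok);   /* prints "usr", "local", "bin" */
            return 0;
    }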
From patchwork Mon May 1 16:54:17 2023
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 89123
Date: Mon, 1 May 2023 09:54:17 -0700
In-Reply-To: <20230501165450.15352-1-surenb@google.com>
References: <20230501165450.15352-1-surenb@google.com>
Message-ID: <20230501165450.15352-8-surenb@google.com>
Subject: [PATCH 07/40] Lazy percpu counters
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
From: Kent Overstreet

This patch adds lib/lazy-percpu-counter.c, which implements counters that
start out as atomics, but lazily switch to percpu mode if the update rate
crosses some threshold (arbitrarily set at 256 per second).

Signed-off-by: Kent Overstreet
Signed-off-by: Suren Baghdasaryan
---
 include/linux/lazy-percpu-counter.h | 102 ++++++++++++++++++++++
 lib/Kconfig                         |   3 +
 lib/Makefile                        |   2 +
 lib/lazy-percpu-counter.c           | 127 ++++++++++++++++++++++
 4 files changed, 234 insertions(+)
 create mode 100644 include/linux/lazy-percpu-counter.h
 create mode 100644 lib/lazy-percpu-counter.c

diff --git a/include/linux/lazy-percpu-counter.h b/include/linux/lazy-percpu-counter.h
new file mode 100644
index 000000000000..45ca9e2ce58b
--- /dev/null
+++ b/include/linux/lazy-percpu-counter.h
@@ -0,0 +1,102 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Lazy percpu counters:
+ * (C) 2022 Kent Overstreet
+ *
+ * Lazy percpu counters start out in atomic mode, then switch to percpu mode if
+ * the update rate crosses some threshold.
+ *
+ * This means we don't have to decide between low memory overhead atomic
+ * counters and higher performance percpu counters - we can have our cake and
+ * eat it, too!
+ *
+ * Internally we use an atomic64_t, where the low bit indicates whether we're in
+ * percpu mode, and the high 8 bits are a secondary counter that's incremented
+ * when the counter is modified - meaning 55 bits of precision are available for
+ * the counter itself.
+ */
+
+#ifndef _LINUX_LAZY_PERCPU_COUNTER_H
+#define _LINUX_LAZY_PERCPU_COUNTER_H
+
+#include <linux/atomic.h>
+#include <linux/types.h>
+
+struct lazy_percpu_counter {
+	atomic64_t	v;
+	unsigned long	last_wrap;
+};
+
+void lazy_percpu_counter_exit(struct lazy_percpu_counter *c);
+void lazy_percpu_counter_add_slowpath(struct lazy_percpu_counter *c, s64 i);
+void lazy_percpu_counter_add_slowpath_noupgrade(struct lazy_percpu_counter *c, s64 i);
+s64 lazy_percpu_counter_read(struct lazy_percpu_counter *c);
+
+/*
+ * We use the high bits of the atomic counter for a secondary counter, which is
+ * incremented every time the counter is touched. When the secondary counter
+ * wraps, we check the time the counter last wrapped, and if it was recent
+ * enough that means the update frequency has crossed our threshold and we
+ * switch to percpu mode:
+ */
+#define COUNTER_MOD_BITS		8
+#define COUNTER_MOD_MASK		~(~0ULL >> COUNTER_MOD_BITS)
+#define COUNTER_MOD_BITS_START		(64 - COUNTER_MOD_BITS)
+
+/*
+ * We use the low bit of the counter to indicate whether we're in atomic mode
+ * (low bit clear), or percpu mode (low bit set, counter is a pointer to actual
+ * percpu counters):
+ */
+#define COUNTER_IS_PCPU_BIT		1
+
+static inline u64 __percpu *lazy_percpu_counter_is_pcpu(u64 v)
+{
+	if (!(v & COUNTER_IS_PCPU_BIT))
+		return NULL;
+
+	v ^= COUNTER_IS_PCPU_BIT;
+	return (u64 __percpu *)(unsigned long)v;
+}
+
+/**
+ * lazy_percpu_counter_add: Add a value to a lazy_percpu_counter
+ *
+ * @c: counter to modify
+ * @i: value to add
+ */
+static inline void lazy_percpu_counter_add(struct lazy_percpu_counter *c, s64 i)
+{
+	u64 v = atomic64_read(&c->v);
+	u64 __percpu *pcpu_v = lazy_percpu_counter_is_pcpu(v);
+
+	if (likely(pcpu_v))
+		this_cpu_add(*pcpu_v, i);
+	else
+		lazy_percpu_counter_add_slowpath(c, i);
+}
+
+/**
+ * lazy_percpu_counter_add_noupgrade: Add a value to a lazy_percpu_counter,
+ * without upgrading to percpu mode
+ *
+ * @c: counter to modify
+ * @i: value to add
+ */
+static inline void lazy_percpu_counter_add_noupgrade(struct lazy_percpu_counter *c, s64 i)
+{
+	u64 v = atomic64_read(&c->v);
+	u64 __percpu *pcpu_v = lazy_percpu_counter_is_pcpu(v);
+
+	if (likely(pcpu_v))
+		this_cpu_add(*pcpu_v, i);
+	else
+		lazy_percpu_counter_add_slowpath_noupgrade(c, i);
+}
+
+static inline void lazy_percpu_counter_sub(struct lazy_percpu_counter *c, s64 i)
+{
+	lazy_percpu_counter_add(c, -i);
+}
+
+#endif /* _LINUX_LAZY_PERCPU_COUNTER_H */
diff --git a/lib/Kconfig b/lib/Kconfig
index 5c2da561c516..7380292a8fcd 100644
--- a/lib/Kconfig
+++ b/lib/Kconfig
@@ -505,6 +505,9 @@ config ASSOCIATIVE_ARRAY
 	  for more information.
+config LAZY_PERCPU_COUNTER
+	bool
+
 config HAS_IOMEM
 	bool
 	depends on !NO_IOMEM
diff --git a/lib/Makefile b/lib/Makefile
index 876fcdeae34e..293a0858a3f8 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -164,6 +164,8 @@ obj-$(CONFIG_DEBUG_PREEMPT) += smp_processor_id.o
 obj-$(CONFIG_DEBUG_LIST) += list_debug.o
 obj-$(CONFIG_DEBUG_OBJECTS) += debugobjects.o
+obj-$(CONFIG_LAZY_PERCPU_COUNTER) += lazy-percpu-counter.o
+
 obj-$(CONFIG_BITREVERSE) += bitrev.o
 obj-$(CONFIG_LINEAR_RANGES) += linear_ranges.o
 obj-$(CONFIG_PACKING) += packing.o
diff --git a/lib/lazy-percpu-counter.c b/lib/lazy-percpu-counter.c
new file mode 100644
index 000000000000..4f4e32c2dc09
--- /dev/null
+++ b/lib/lazy-percpu-counter.c
@@ -0,0 +1,127 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#include <linux/atomic.h>
+#include <linux/gfp.h>
+#include <linux/jiffies.h>
+#include <linux/lazy-percpu-counter.h>
+#include <linux/percpu.h>
+
+static inline s64 lazy_percpu_counter_atomic_val(s64 v)
+{
+	/* Ensure output is sign extended properly: */
+	return (v << COUNTER_MOD_BITS) >>
+		(COUNTER_MOD_BITS + COUNTER_IS_PCPU_BIT);
+}
+
+static void lazy_percpu_counter_switch_to_pcpu(struct lazy_percpu_counter *c)
+{
+	u64 __percpu *pcpu_v = alloc_percpu_gfp(u64, GFP_ATOMIC|__GFP_NOWARN);
+	u64 old, new, v;
+
+	if (!pcpu_v)
+		return;
+
+	preempt_disable();
+	v = atomic64_read(&c->v);
+	do {
+		if (lazy_percpu_counter_is_pcpu(v)) {
+			/* Raced with another upgrade; drop our allocation. */
+			free_percpu(pcpu_v);
+			goto out;
+		}
+
+		old = v;
+		new = (unsigned long)pcpu_v | 1;
+
+		*this_cpu_ptr(pcpu_v) = lazy_percpu_counter_atomic_val(v);
+	} while ((v = atomic64_cmpxchg(&c->v, old, new)) != old);
+out:
+	preempt_enable();
+}
+
+/**
+ * lazy_percpu_counter_exit: Free resources associated with a
+ * lazy_percpu_counter
+ *
+ * @c: counter to exit
+ */
+void lazy_percpu_counter_exit(struct lazy_percpu_counter *c)
+{
+	free_percpu(lazy_percpu_counter_is_pcpu(atomic64_read(&c->v)));
+}
+EXPORT_SYMBOL_GPL(lazy_percpu_counter_exit);
+
+/**
+ * lazy_percpu_counter_read: Read current value of a lazy_percpu_counter
+ *
+ * @c: counter to read
+ */
+s64 lazy_percpu_counter_read(struct lazy_percpu_counter *c)
+{
+	s64 v = atomic64_read(&c->v);
+	u64 __percpu *pcpu_v = lazy_percpu_counter_is_pcpu(v);
+
+	if (pcpu_v) {
+		int cpu;
+
+		v = 0;
+		for_each_possible_cpu(cpu)
+			v += *per_cpu_ptr(pcpu_v, cpu);
+	} else {
+		v = lazy_percpu_counter_atomic_val(v);
+	}
+
+	return v;
+}
+EXPORT_SYMBOL_GPL(lazy_percpu_counter_read);
+
+void lazy_percpu_counter_add_slowpath(struct lazy_percpu_counter *c, s64 i)
+{
+	u64 atomic_i;
+	u64 old, v = atomic64_read(&c->v);
+	u64 __percpu *pcpu_v;
+
+	atomic_i = i << COUNTER_IS_PCPU_BIT;
+	atomic_i &= ~COUNTER_MOD_MASK;
+	atomic_i |= 1ULL << COUNTER_MOD_BITS_START;
+
+	do {
+		pcpu_v = lazy_percpu_counter_is_pcpu(v);
+		if (pcpu_v) {
+			this_cpu_add(*pcpu_v, i);
+			return;
+		}
+
+		old = v;
+	} while ((v = atomic64_cmpxchg(&c->v, old, old + atomic_i)) != old);
+
+	if (unlikely(!(v & COUNTER_MOD_MASK))) {
+		unsigned long now = jiffies;
+
+		if (c->last_wrap &&
+		    unlikely(time_after(c->last_wrap + HZ, now)))
+			lazy_percpu_counter_switch_to_pcpu(c);
+		else
+			c->last_wrap = now;
+	}
+}
+EXPORT_SYMBOL(lazy_percpu_counter_add_slowpath);
+
+void lazy_percpu_counter_add_slowpath_noupgrade(struct lazy_percpu_counter *c, s64 i)
+{
+	u64 atomic_i;
+	u64 old, v = atomic64_read(&c->v);
+	u64 __percpu *pcpu_v;
+
+	atomic_i = i << COUNTER_IS_PCPU_BIT;
+	atomic_i &= ~COUNTER_MOD_MASK;
+
+	do {
+		pcpu_v = lazy_percpu_counter_is_pcpu(v);
+		if (pcpu_v) {
+			this_cpu_add(*pcpu_v, i);
+			return;
+		}
+
+		old = v;
+	} while ((v = atomic64_cmpxchg(&c->v, old, old + atomic_i)) != old);
+}
+EXPORT_SYMBOL(lazy_percpu_counter_add_slowpath_noupgrade);
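As a usage sketch (hedged; the struct, field and function names below are
hypothetical and not part of this patch): a subsystem embeds a
lazy_percpu_counter, bumps it on its hot path, and tears it down on exit. No
explicit init is needed beyond zero-initializing the containing structure.

struct my_stats {
	struct lazy_percpu_counter events;	/* starts out in atomic mode */
};

static void my_stats_record(struct my_stats *s)
{
	/* Cheap atomic add; upgrades itself to percpu under heavy update rates. */
	lazy_percpu_counter_add(&s->events, 1);
}

static void my_stats_report(struct my_stats *s)
{
	pr_info("events: %lld\n", lazy_percpu_counter_read(&s->events));
}

static void my_stats_destroy(struct my_stats *s)
{
	/* Frees the percpu allocation if the counter was upgraded. */
	lazy_percpu_counter_exit(&s->events);
}
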
From patchwork Mon May 1 16:54:18 2023
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Date: Mon, 1 May 2023 09:54:18 -0700
In-Reply-To: <20230501165450.15352-1-surenb@google.com>
References: <20230501165450.15352-1-surenb@google.com>
Message-ID: <20230501165450.15352-9-surenb@google.com>
Subject: [PATCH 08/40] mm: introduce slabobj_ext to support slab object extensions
Currently slab pages can store only vectors of obj_cgroup pointers in
page->memcg_data. Introduce a slabobj_ext structure to allow more data to be
stored for each slab object, and wrap obj_cgroup in slabobj_ext to preserve
the current functionality while allowing slabobj_ext to be extended in the
future.
Signed-off-by: Suren Baghdasaryan --- include/linux/memcontrol.h | 20 +++-- include/linux/mm_types.h | 4 +- init/Kconfig | 4 + mm/kfence/core.c | 14 ++-- mm/kfence/kfence.h | 4 +- mm/memcontrol.c | 56 ++------------ mm/page_owner.c | 2 +- mm/slab.h | 148 +++++++++++++++++++++++++------------ mm/slab_common.c | 47 ++++++++++++ 9 files changed, 185 insertions(+), 114 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 222d7370134c..b9fd9732a52b 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -339,8 +339,8 @@ struct mem_cgroup { extern struct mem_cgroup *root_mem_cgroup; enum page_memcg_data_flags { - /* page->memcg_data is a pointer to an objcgs vector */ - MEMCG_DATA_OBJCGS = (1UL << 0), + /* page->memcg_data is a pointer to an slabobj_ext vector */ + MEMCG_DATA_OBJEXTS = (1UL << 0), /* page has been accounted as a non-slab kernel page */ MEMCG_DATA_KMEM = (1UL << 1), /* the next bit after the last actual flag */ @@ -378,7 +378,7 @@ static inline struct mem_cgroup *__folio_memcg(struct folio *folio) unsigned long memcg_data = folio->memcg_data; VM_BUG_ON_FOLIO(folio_test_slab(folio), folio); - VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJCGS, folio); + VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJEXTS, folio); VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_KMEM, folio); return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); @@ -399,7 +399,7 @@ static inline struct obj_cgroup *__folio_objcg(struct folio *folio) unsigned long memcg_data = folio->memcg_data; VM_BUG_ON_FOLIO(folio_test_slab(folio), folio); - VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJCGS, folio); + VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJEXTS, folio); VM_BUG_ON_FOLIO(!(memcg_data & MEMCG_DATA_KMEM), folio); return (struct obj_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); @@ -496,7 +496,7 @@ static inline struct mem_cgroup *folio_memcg_check(struct folio *folio) */ unsigned long memcg_data = READ_ONCE(folio->memcg_data); - if (memcg_data & MEMCG_DATA_OBJCGS) + if (memcg_data & MEMCG_DATA_OBJEXTS) return NULL; if (memcg_data & MEMCG_DATA_KMEM) { @@ -542,7 +542,7 @@ static inline struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *ob static inline bool folio_memcg_kmem(struct folio *folio) { VM_BUG_ON_PGFLAGS(PageTail(&folio->page), &folio->page); - VM_BUG_ON_FOLIO(folio->memcg_data & MEMCG_DATA_OBJCGS, folio); + VM_BUG_ON_FOLIO(folio->memcg_data & MEMCG_DATA_OBJEXTS, folio); return folio->memcg_data & MEMCG_DATA_KMEM; } @@ -1606,6 +1606,14 @@ unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order, } #endif /* CONFIG_MEMCG */ +/* + * Extended information for slab objects stored as an array in page->memcg_data + * if MEMCG_DATA_OBJEXTS is set. + */ +struct slabobj_ext { + struct obj_cgroup *objcg; +} __aligned(8); + static inline void __inc_lruvec_kmem_state(void *p, enum node_stat_item idx) { __mod_lruvec_kmem_state(p, idx, 1); diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index 306a3d1a0fa6..e79303e1e30c 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -194,7 +194,7 @@ struct page { /* Usage count. *DO NOT USE DIRECTLY*. 
See page_ref.h */ atomic_t _refcount; -#ifdef CONFIG_MEMCG +#ifdef CONFIG_SLAB_OBJ_EXT unsigned long memcg_data; #endif @@ -320,7 +320,7 @@ struct folio { void *private; atomic_t _mapcount; atomic_t _refcount; -#ifdef CONFIG_MEMCG +#ifdef CONFIG_SLAB_OBJ_EXT unsigned long memcg_data; #endif /* private: the union with struct page is transitional */ diff --git a/init/Kconfig b/init/Kconfig index 32c24950c4ce..44267919a2a2 100644 --- a/init/Kconfig +++ b/init/Kconfig @@ -936,10 +936,14 @@ config CGROUP_FAVOR_DYNMODS Say N if unsure. +config SLAB_OBJ_EXT + bool + config MEMCG bool "Memory controller" select PAGE_COUNTER select EVENTFD + select SLAB_OBJ_EXT help Provides control over the memory footprint of tasks in a cgroup. diff --git a/mm/kfence/core.c b/mm/kfence/core.c index dad3c0eb70a0..aea6fa145080 100644 --- a/mm/kfence/core.c +++ b/mm/kfence/core.c @@ -590,9 +590,9 @@ static unsigned long kfence_init_pool(void) continue; __folio_set_slab(slab_folio(slab)); -#ifdef CONFIG_MEMCG - slab->memcg_data = (unsigned long)&kfence_metadata[i / 2 - 1].objcg | - MEMCG_DATA_OBJCGS; +#ifdef CONFIG_MEMCG_KMEM + slab->obj_exts = (unsigned long)&kfence_metadata[i / 2 - 1].obj_exts | + MEMCG_DATA_OBJEXTS; #endif } @@ -634,8 +634,8 @@ static unsigned long kfence_init_pool(void) if (!i || (i % 2)) continue; -#ifdef CONFIG_MEMCG - slab->memcg_data = 0; +#ifdef CONFIG_MEMCG_KMEM + slab->obj_exts = 0; #endif __folio_clear_slab(slab_folio(slab)); } @@ -1093,8 +1093,8 @@ void __kfence_free(void *addr) { struct kfence_metadata *meta = addr_to_metadata((unsigned long)addr); -#ifdef CONFIG_MEMCG - KFENCE_WARN_ON(meta->objcg); +#ifdef CONFIG_MEMCG_KMEM + KFENCE_WARN_ON(meta->obj_exts.objcg); #endif /* * If the objects of the cache are SLAB_TYPESAFE_BY_RCU, defer freeing diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h index 2aafc46a4aaf..8e0d76c4ea2a 100644 --- a/mm/kfence/kfence.h +++ b/mm/kfence/kfence.h @@ -97,8 +97,8 @@ struct kfence_metadata { struct kfence_track free_track; /* For updating alloc_covered on frees. */ u32 alloc_stack_hash; -#ifdef CONFIG_MEMCG - struct obj_cgroup *objcg; +#ifdef CONFIG_MEMCG_KMEM + struct slabobj_ext obj_exts; #endif }; diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 4b27e245a055..f2a7fe718117 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -2892,13 +2892,6 @@ static void commit_charge(struct folio *folio, struct mem_cgroup *memcg) } #ifdef CONFIG_MEMCG_KMEM -/* - * The allocated objcg pointers array is not accounted directly. - * Moreover, it should not come from DMA buffer and is not readily - * reclaimable. So those GFP bits should be masked off. - */ -#define OBJCGS_CLEAR_MASK (__GFP_DMA | __GFP_RECLAIMABLE | __GFP_ACCOUNT) - /* * mod_objcg_mlstate() may be called with irq enabled, so * mod_memcg_lruvec_state() should be used. @@ -2917,62 +2910,27 @@ static inline void mod_objcg_mlstate(struct obj_cgroup *objcg, rcu_read_unlock(); } -int memcg_alloc_slab_cgroups(struct slab *slab, struct kmem_cache *s, - gfp_t gfp, bool new_slab) -{ - unsigned int objects = objs_per_slab(s, slab); - unsigned long memcg_data; - void *vec; - - gfp &= ~OBJCGS_CLEAR_MASK; - vec = kcalloc_node(objects, sizeof(struct obj_cgroup *), gfp, - slab_nid(slab)); - if (!vec) - return -ENOMEM; - - memcg_data = (unsigned long) vec | MEMCG_DATA_OBJCGS; - if (new_slab) { - /* - * If the slab is brand new and nobody can yet access its - * memcg_data, no synchronization is required and memcg_data can - * be simply assigned. 
- */ - slab->memcg_data = memcg_data; - } else if (cmpxchg(&slab->memcg_data, 0, memcg_data)) { - /* - * If the slab is already in use, somebody can allocate and - * assign obj_cgroups in parallel. In this case the existing - * objcg vector should be reused. - */ - kfree(vec); - return 0; - } - - kmemleak_not_leak(vec); - return 0; -} - static __always_inline struct mem_cgroup *mem_cgroup_from_obj_folio(struct folio *folio, void *p) { /* * Slab objects are accounted individually, not per-page. * Memcg membership data for each individual object is saved in - * slab->memcg_data. + * slab->obj_exts. */ if (folio_test_slab(folio)) { - struct obj_cgroup **objcgs; + struct slabobj_ext *obj_exts; struct slab *slab; unsigned int off; slab = folio_slab(folio); - objcgs = slab_objcgs(slab); - if (!objcgs) + obj_exts = slab_obj_exts(slab); + if (!obj_exts) return NULL; off = obj_to_index(slab->slab_cache, slab, p); - if (objcgs[off]) - return obj_cgroup_memcg(objcgs[off]); + if (obj_exts[off].objcg) + return obj_cgroup_memcg(obj_exts[off].objcg); return NULL; } @@ -2980,7 +2938,7 @@ struct mem_cgroup *mem_cgroup_from_obj_folio(struct folio *folio, void *p) /* * folio_memcg_check() is used here, because in theory we can encounter * a folio where the slab flag has been cleared already, but - * slab->memcg_data has not been freed yet + * slab->obj_exts has not been freed yet * folio_memcg_check() will guarantee that a proper memory * cgroup pointer or NULL will be returned. */ diff --git a/mm/page_owner.c b/mm/page_owner.c index 31169b3e7f06..8b6086c666e6 100644 --- a/mm/page_owner.c +++ b/mm/page_owner.c @@ -372,7 +372,7 @@ static inline int print_page_owner_memcg(char *kbuf, size_t count, int ret, if (!memcg_data) goto out_unlock; - if (memcg_data & MEMCG_DATA_OBJCGS) + if (memcg_data & MEMCG_DATA_OBJEXTS) ret += scnprintf(kbuf + ret, count - ret, "Slab cache page\n"); diff --git a/mm/slab.h b/mm/slab.h index f01ac256a8f5..25d14b3a7280 100644 --- a/mm/slab.h +++ b/mm/slab.h @@ -57,8 +57,8 @@ struct slab { #endif atomic_t __page_refcount; -#ifdef CONFIG_MEMCG - unsigned long memcg_data; +#ifdef CONFIG_SLAB_OBJ_EXT + unsigned long obj_exts; #endif }; @@ -67,8 +67,8 @@ struct slab { SLAB_MATCH(flags, __page_flags); SLAB_MATCH(compound_head, slab_cache); /* Ensure bit 0 is clear */ SLAB_MATCH(_refcount, __page_refcount); -#ifdef CONFIG_MEMCG -SLAB_MATCH(memcg_data, memcg_data); +#ifdef CONFIG_SLAB_OBJ_EXT +SLAB_MATCH(memcg_data, obj_exts); #endif #undef SLAB_MATCH static_assert(sizeof(struct slab) <= sizeof(struct page)); @@ -390,36 +390,106 @@ static inline bool kmem_cache_debug_flags(struct kmem_cache *s, slab_flags_t fla return false; } -#ifdef CONFIG_MEMCG_KMEM +#ifdef CONFIG_SLAB_OBJ_EXT + /* - * slab_objcgs - get the object cgroups vector associated with a slab + * slab_obj_exts - get the pointer to the slab object extension vector + * associated with a slab. * @slab: a pointer to the slab struct * - * Returns a pointer to the object cgroups vector associated with the slab, + * Returns a pointer to the object extension vector associated with the slab, * or NULL if no such vector has been associated yet. 
*/ -static inline struct obj_cgroup **slab_objcgs(struct slab *slab) +static inline struct slabobj_ext *slab_obj_exts(struct slab *slab) { - unsigned long memcg_data = READ_ONCE(slab->memcg_data); + unsigned long obj_exts = READ_ONCE(slab->obj_exts); - VM_BUG_ON_PAGE(memcg_data && !(memcg_data & MEMCG_DATA_OBJCGS), +#ifdef CONFIG_MEMCG + VM_BUG_ON_PAGE(obj_exts && !(obj_exts & MEMCG_DATA_OBJEXTS), slab_page(slab)); - VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_KMEM, slab_page(slab)); + VM_BUG_ON_PAGE(obj_exts & MEMCG_DATA_KMEM, slab_page(slab)); - return (struct obj_cgroup **)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); + return (struct slabobj_ext *)(obj_exts & ~MEMCG_DATA_FLAGS_MASK); +#else + return (struct slabobj_ext *)obj_exts; +#endif } -int memcg_alloc_slab_cgroups(struct slab *slab, struct kmem_cache *s, - gfp_t gfp, bool new_slab); -void mod_objcg_state(struct obj_cgroup *objcg, struct pglist_data *pgdat, - enum node_stat_item idx, int nr); +int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s, + gfp_t gfp, bool new_slab); -static inline void memcg_free_slab_cgroups(struct slab *slab) +static inline bool need_slab_obj_ext(void) { - kfree(slab_objcgs(slab)); - slab->memcg_data = 0; + /* + * CONFIG_MEMCG_KMEM creates vector of obj_cgroup objects conditionally + * inside memcg_slab_post_alloc_hook. No other users for now. + */ + return false; } +static inline void free_slab_obj_exts(struct slab *slab) +{ + struct slabobj_ext *obj_exts; + + obj_exts = slab_obj_exts(slab); + if (!obj_exts) + return; + + kfree(obj_exts); + slab->obj_exts = 0; +} + +static inline struct slabobj_ext * +prepare_slab_obj_exts_hook(struct kmem_cache *s, gfp_t flags, void *p) +{ + struct slab *slab; + + if (!p) + return NULL; + + if (!need_slab_obj_ext()) + return NULL; + + slab = virt_to_slab(p); + if (!slab_obj_exts(slab) && + WARN(alloc_slab_obj_exts(slab, s, flags, false), + "%s, %s: Failed to create slab extension vector!\n", + __func__, s->name)) + return NULL; + + return slab_obj_exts(slab) + obj_to_index(s, slab, p); +} + +#else /* CONFIG_SLAB_OBJ_EXT */ + +static inline struct slabobj_ext *slab_obj_exts(struct slab *slab) +{ + return NULL; +} + +static inline int alloc_slab_obj_exts(struct slab *slab, + struct kmem_cache *s, gfp_t gfp, + bool new_slab) +{ + return 0; +} + +static inline void free_slab_obj_exts(struct slab *slab) +{ +} + +static inline struct slabobj_ext * +prepare_slab_obj_exts_hook(struct kmem_cache *s, gfp_t flags, void *p) +{ + return NULL; +} + +#endif /* CONFIG_SLAB_OBJ_EXT */ + +#ifdef CONFIG_MEMCG_KMEM +void mod_objcg_state(struct obj_cgroup *objcg, struct pglist_data *pgdat, + enum node_stat_item idx, int nr); + static inline size_t obj_full_size(struct kmem_cache *s) { /* @@ -487,16 +557,15 @@ static inline void memcg_slab_post_alloc_hook(struct kmem_cache *s, if (likely(p[i])) { slab = virt_to_slab(p[i]); - if (!slab_objcgs(slab) && - memcg_alloc_slab_cgroups(slab, s, flags, - false)) { + if (!slab_obj_exts(slab) && + alloc_slab_obj_exts(slab, s, flags, false)) { obj_cgroup_uncharge(objcg, obj_full_size(s)); continue; } off = obj_to_index(s, slab, p[i]); obj_cgroup_get(objcg); - slab_objcgs(slab)[off] = objcg; + slab_obj_exts(slab)[off].objcg = objcg; mod_objcg_state(objcg, slab_pgdat(slab), cache_vmstat_idx(s), obj_full_size(s)); } else { @@ -509,14 +578,14 @@ static inline void memcg_slab_post_alloc_hook(struct kmem_cache *s, static inline void memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab, void **p, int objects) { - struct obj_cgroup **objcgs; + struct 
slabobj_ext *obj_exts; int i; if (!memcg_kmem_online()) return; - objcgs = slab_objcgs(slab); - if (!objcgs) + obj_exts = slab_obj_exts(slab); + if (!obj_exts) return; for (i = 0; i < objects; i++) { @@ -524,11 +593,11 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab, unsigned int off; off = obj_to_index(s, slab, p[i]); - objcg = objcgs[off]; + objcg = obj_exts[off].objcg; if (!objcg) continue; - objcgs[off] = NULL; + obj_exts[off].objcg = NULL; obj_cgroup_uncharge(objcg, obj_full_size(s)); mod_objcg_state(objcg, slab_pgdat(slab), cache_vmstat_idx(s), -obj_full_size(s)); @@ -537,27 +606,11 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab, } #else /* CONFIG_MEMCG_KMEM */ -static inline struct obj_cgroup **slab_objcgs(struct slab *slab) -{ - return NULL; -} - static inline struct mem_cgroup *memcg_from_slab_obj(void *ptr) { return NULL; } -static inline int memcg_alloc_slab_cgroups(struct slab *slab, - struct kmem_cache *s, gfp_t gfp, - bool new_slab) -{ - return 0; -} - -static inline void memcg_free_slab_cgroups(struct slab *slab) -{ -} - static inline bool memcg_slab_pre_alloc_hook(struct kmem_cache *s, struct list_lru *lru, struct obj_cgroup **objcgp, @@ -594,7 +647,7 @@ static __always_inline void account_slab(struct slab *slab, int order, struct kmem_cache *s, gfp_t gfp) { if (memcg_kmem_online() && (s->flags & SLAB_ACCOUNT)) - memcg_alloc_slab_cgroups(slab, s, gfp, true); + alloc_slab_obj_exts(slab, s, gfp, true); mod_node_page_state(slab_pgdat(slab), cache_vmstat_idx(s), PAGE_SIZE << order); @@ -603,8 +656,7 @@ static __always_inline void account_slab(struct slab *slab, int order, static __always_inline void unaccount_slab(struct slab *slab, int order, struct kmem_cache *s) { - if (memcg_kmem_online()) - memcg_free_slab_cgroups(slab); + free_slab_obj_exts(slab); mod_node_page_state(slab_pgdat(slab), cache_vmstat_idx(s), -(PAGE_SIZE << order)); @@ -684,6 +736,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s, unsigned int orig_size) { unsigned int zero_size = s->object_size; + struct slabobj_ext *obj_exts; size_t i; flags &= gfp_allowed_mask; @@ -714,6 +767,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s, kmemleak_alloc_recursive(p[i], s->object_size, 1, s->flags, flags); kmsan_slab_alloc(s, p[i], flags); + obj_exts = prepare_slab_obj_exts_hook(s, flags, p[i]); } memcg_slab_post_alloc_hook(s, objcg, flags, size, p); diff --git a/mm/slab_common.c b/mm/slab_common.c index 607249785c07..f11cc072b01e 100644 --- a/mm/slab_common.c +++ b/mm/slab_common.c @@ -204,6 +204,53 @@ struct kmem_cache *find_mergeable(unsigned int size, unsigned int align, return NULL; } +#ifdef CONFIG_SLAB_OBJ_EXT +/* + * The allocated objcg pointers array is not accounted directly. + * Moreover, it should not come from DMA buffer and is not readily + * reclaimable. So those GFP bits should be masked off. 
+ */
+#define OBJCGS_CLEAR_MASK	(__GFP_DMA | __GFP_RECLAIMABLE | __GFP_ACCOUNT)
+
+int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
+			gfp_t gfp, bool new_slab)
+{
+	unsigned int objects = objs_per_slab(s, slab);
+	unsigned long obj_exts;
+	void *vec;
+
+	gfp &= ~OBJCGS_CLEAR_MASK;
+	vec = kcalloc_node(objects, sizeof(struct slabobj_ext), gfp,
+			   slab_nid(slab));
+	if (!vec)
+		return -ENOMEM;
+
+	obj_exts = (unsigned long)vec;
+#ifdef CONFIG_MEMCG
+	obj_exts |= MEMCG_DATA_OBJEXTS;
+#endif
+	if (new_slab) {
+		/*
+		 * If the slab is brand new and nobody can yet access its
+		 * obj_exts, no synchronization is required and obj_exts can
+		 * be simply assigned.
+		 */
+		slab->obj_exts = obj_exts;
+	} else if (cmpxchg(&slab->obj_exts, 0, obj_exts)) {
+		/*
+		 * If the slab is already in use, somebody can allocate and
+		 * assign slabobj_exts in parallel. In this case the existing
+		 * objcg vector should be reused.
+		 */
+		kfree(vec);
+		return 0;
+	}
+
+	kmemleak_not_leak(vec);
+	return 0;
+}
+#endif /* CONFIG_SLAB_OBJ_EXT */
+
 static struct kmem_cache *create_cache(const char *name,
 		unsigned int object_size, unsigned int align,
 		slab_flags_t flags, unsigned int useroffset,
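For orientation, a hedged sketch (not code from this series; the helper name is
hypothetical) of how a per-object extension is reached once the vector exists:
the slab's slabobj_ext array is indexed by the object's index within the slab.

static struct obj_cgroup *obj_exts_objcg(struct kmem_cache *s, void *p)
{
	struct slab *slab = virt_to_slab(p);
	struct slabobj_ext *vec = slab_obj_exts(slab);

	if (!vec)
		return NULL;	/* no extension vector allocated for this slab yet */

	return vec[obj_to_index(s, slab, p)].objcg;
}
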
From patchwork Mon May 1 16:54:19 2023
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Date: Mon, 1 May 2023 09:54:19 -0700
In-Reply-To: <20230501165450.15352-1-surenb@google.com>
References: <20230501165450.15352-1-surenb@google.com>
Message-ID: <20230501165450.15352-10-surenb@google.com>
Subject: [PATCH 09/40] mm: introduce __GFP_NO_OBJ_EXT flag to selectively prevent slabobj_ext creation
Introduce the __GFP_NO_OBJ_EXT flag to prevent recursive allocations when
allocating a slabobj_ext vector on a slab.

Signed-off-by: Suren Baghdasaryan
---
 include/linux/gfp_types.h | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/include/linux/gfp_types.h b/include/linux/gfp_types.h
index 6583a58670c5..aab1959130f9 100644
--- a/include/linux/gfp_types.h
+++ b/include/linux/gfp_types.h
@@ -53,8 +53,13 @@ typedef unsigned int __bitwise gfp_t;
 #define ___GFP_SKIP_ZERO	0
 #define ___GFP_SKIP_KASAN	0
 #endif
+#ifdef CONFIG_SLAB_OBJ_EXT
+#define ___GFP_NO_OBJ_EXT	0x4000000u
+#else
+#define ___GFP_NO_OBJ_EXT	0
+#endif
 #ifdef CONFIG_LOCKDEP
-#define ___GFP_NOLOCKDEP	0x4000000u
+#define ___GFP_NOLOCKDEP	0x8000000u
 #else
 #define ___GFP_NOLOCKDEP	0
 #endif
@@ -99,12 +104,15 @@ typedef unsigned int __bitwise gfp_t;
  * node with no fallbacks or placement policy enforcements.
  *
  * %__GFP_ACCOUNT causes the allocation to be accounted to kmemcg.
+ *
+ * %__GFP_NO_OBJ_EXT causes slab allocation to have no object extension.
 */
 #define __GFP_RECLAIMABLE ((__force gfp_t)___GFP_RECLAIMABLE)
 #define __GFP_WRITE	((__force gfp_t)___GFP_WRITE)
 #define __GFP_HARDWALL   ((__force gfp_t)___GFP_HARDWALL)
 #define __GFP_THISNODE	((__force gfp_t)___GFP_THISNODE)
 #define __GFP_ACCOUNT	((__force gfp_t)___GFP_ACCOUNT)
+#define __GFP_NO_OBJ_EXT   ((__force gfp_t)___GFP_NO_OBJ_EXT)
 
 /**
  * DOC: Watermark modifiers
@@ -249,7 +257,7 @@ typedef unsigned int __bitwise gfp_t;
 #define __GFP_NOLOCKDEP ((__force gfp_t)___GFP_NOLOCKDEP)
 
 /* Room for N __GFP_FOO bits */
-#define __GFP_BITS_SHIFT (26 + IS_ENABLED(CONFIG_LOCKDEP))
+#define __GFP_BITS_SHIFT (27 + IS_ENABLED(CONFIG_LOCKDEP))
 #define __GFP_BITS_MASK ((__force gfp_t)((1 << __GFP_BITS_SHIFT) - 1))
 
 /**
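A hedged sketch of the intended use (the surrounding variables are assumed from
context and the call site is illustrative; it mirrors the slab_common.c change
made later in the series): code that allocates extension metadata passes
__GFP_NO_OBJ_EXT so the nested slab allocation does not itself try to create a
slabobj_ext vector.

	/* Illustrative only: suppress recursive slabobj_ext creation. */
	vec = kcalloc_node(objects, sizeof(struct slabobj_ext),
			   gfp | __GFP_NO_OBJ_EXT, slab_nid(slab));
	if (!vec)
		return -ENOMEM;
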
From patchwork Mon May 1 16:54:20 2023
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Date: Mon, 1 May 2023 09:54:20 -0700
In-Reply-To: <20230501165450.15352-1-surenb@google.com>
References: <20230501165450.15352-1-surenb@google.com>
Message-ID: <20230501165450.15352-11-surenb@google.com>
Subject: [PATCH 10/40] mm/slab: introduce SLAB_NO_OBJ_EXT to avoid obj_ext creation
Slab extension objects can't be allocated before slab infrastructure is
initialized, but some caches, like kmem_cache and kmem_cache_node, are created
before that point, so objects from these caches can't have extension objects.
Introduce the SLAB_NO_OBJ_EXT slab flag to mark such caches and avoid creating
extensions for objects allocated from these slabs.

Signed-off-by: Suren Baghdasaryan
---
 include/linux/slab.h | 7 +++++++
 mm/slab.c            | 2 +-
 mm/slub.c            | 5 +++--
 3 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 6b3e155b70bf..99a146f3cedf 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -147,6 +147,13 @@
 #endif
 #define SLAB_TEMPORARY	SLAB_RECLAIM_ACCOUNT	/* Objects are short-lived */
 
+#ifdef CONFIG_SLAB_OBJ_EXT
+/* Slab created using create_boot_cache */
+#define SLAB_NO_OBJ_EXT		((slab_flags_t __force)0x20000000U)
+#else
+#define SLAB_NO_OBJ_EXT		0
+#endif
+
 /*
  * ZERO_SIZE_PTR will be returned for zero sized kmalloc requests.
 *
diff --git a/mm/slab.c b/mm/slab.c
index bb57f7fdbae1..ccc76f7455e9 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1232,7 +1232,7 @@ void __init kmem_cache_init(void)
 	create_boot_cache(kmem_cache, "kmem_cache",
 		offsetof(struct kmem_cache, node) +
 				nr_node_ids * sizeof(struct kmem_cache_node *),
-		SLAB_HWCACHE_ALIGN, 0, 0);
+		SLAB_HWCACHE_ALIGN | SLAB_NO_OBJ_EXT, 0, 0);
 
 	list_add(&kmem_cache->list, &slab_caches);
 	slab_state = PARTIAL;
diff --git a/mm/slub.c b/mm/slub.c
index c87628cd8a9a..507b71372ee4 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -5020,7 +5020,8 @@ void __init kmem_cache_init(void)
 	node_set(node, slab_nodes);
 
 	create_boot_cache(kmem_cache_node, "kmem_cache_node",
-		sizeof(struct kmem_cache_node), SLAB_HWCACHE_ALIGN, 0, 0);
+		sizeof(struct kmem_cache_node),
+		SLAB_HWCACHE_ALIGN | SLAB_NO_OBJ_EXT, 0, 0);
 
 	hotplug_memory_notifier(slab_memory_callback, SLAB_CALLBACK_PRI);
 
@@ -5030,7 +5031,7 @@ void __init kmem_cache_init(void)
 	create_boot_cache(kmem_cache, "kmem_cache",
 		offsetof(struct kmem_cache, node) +
 			nr_node_ids * sizeof(struct kmem_cache_node *),
-		SLAB_HWCACHE_ALIGN, 0, 0);
+		SLAB_HWCACHE_ALIGN | SLAB_NO_OBJ_EXT, 0, 0);
 
 	kmem_cache = bootstrap(&boot_kmem_cache);
 	kmem_cache_node = bootstrap(&boot_kmem_cache_node);
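For illustration (a hypothetical cache, not part of this patch): any cache that
must never carry per-object extension vectors can opt out at creation time the
same way the boot caches above do.

	/* Hypothetical cache that opts out of slabobj_ext vectors. */
	cache = kmem_cache_create("my_no_ext_cache", sizeof(struct my_obj), 0,
				  SLAB_HWCACHE_ALIGN | SLAB_NO_OBJ_EXT, NULL);
	if (!cache)
		return -ENOMEM;
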
From patchwork Mon May 1 16:54:21 2023
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Date: Mon, 1 May 2023 09:54:21 -0700
In-Reply-To: <20230501165450.15352-1-surenb@google.com>
References: <20230501165450.15352-1-surenb@google.com>
Message-ID: <20230501165450.15352-12-surenb@google.com>
Subject: [PATCH 11/40] mm: prevent slabobj_ext allocations for slabobj_ext and kmem_cache objects
roman.gushchin@linux.dev, mgorman@suse.de, dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com, corbet@lwn.net, void@manifault.com, peterz@infradead.org, juri.lelli@redhat.com, ldufour@linux.ibm.com, catalin.marinas@arm.com, will@kernel.org, arnd@arndb.de, tglx@linutronix.de, mingo@redhat.com, dave.hansen@linux.intel.com, x86@kernel.org, peterx@redhat.com, david@redhat.com, axboe@kernel.dk, mcgrof@kernel.org, masahiroy@kernel.org, nathan@kernel.org, dennis@kernel.org, tj@kernel.org, muchun.song@linux.dev, rppt@kernel.org, paulmck@kernel.org, pasha.tatashin@soleen.com, yosryahmed@google.com, yuzhao@google.com, dhowells@redhat.com, hughd@google.com, andreyknvl@gmail.com, keescook@chromium.org, ndesaulniers@google.com, gregkh@linuxfoundation.org, ebiggers@google.com, ytcoode@gmail.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, bristot@redhat.com, vschneid@redhat.com, cl@linux.com, penberg@kernel.org, iamjoonsoo.kim@lge.com, 42.hyeyoo@gmail.com, glider@google.com, elver@google.com, dvyukov@google.com, shakeelb@google.com, songmuchun@bytedance.com, jbaron@akamai.com, rientjes@google.com, minchan@google.com, kaleshsingh@google.com, surenb@google.com, kernel-team@android.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-arch@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, kasan-dev@googlegroups.com, cgroups@vger.kernel.org X-Spam-Status: No, score=-9.6 required=5.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE,USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1764713202740482419?= X-GMAIL-MSGID: =?utf-8?q?1764713202740482419?= Use __GFP_NO_OBJ_EXT to prevent recursions when allocating slabobj_ext objects. Also prevent slabobj_ext allocations for kmem_cache objects. 
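To make the two opt-out mechanisms concrete before the diff, here is a minimal
sketch that is not part of the series: the cache name, struct my_meta and both
functions are invented. A cache whose objects must never carry slabobj_ext
metadata passes SLAB_NO_OBJ_EXT at creation time, and an individual allocation
that must not trigger extension-vector allocation passes __GFP_NO_OBJ_EXT.

#include <linux/gfp.h>
#include <linux/slab.h>

struct my_meta {			/* invented payload type */
	unsigned long data;
};

static struct kmem_cache *my_meta_cache;

static int __init my_meta_cache_init(void)
{
	/* Cache-wide opt-out: objects of this cache never get slabobj_ext. */
	my_meta_cache = kmem_cache_create("my_meta_cache",
					  sizeof(struct my_meta), 0,
					  SLAB_HWCACHE_ALIGN | SLAB_NO_OBJ_EXT,
					  NULL);
	return my_meta_cache ? 0 : -ENOMEM;
}

static void *my_meta_alloc_vec(size_t nr)
{
	/*
	 * Per-allocation opt-out: the same trick alloc_slab_obj_exts() plays
	 * in the diff below, so the extension vector never needs an extension
	 * vector of its own.
	 */
	return kcalloc(nr, sizeof(struct my_meta),
		       GFP_KERNEL | __GFP_NO_OBJ_EXT);
}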
Signed-off-by: Suren Baghdasaryan
---
 mm/slab.h        | 6 ++++++
 mm/slab_common.c | 2 ++
 2 files changed, 8 insertions(+)

diff --git a/mm/slab.h b/mm/slab.h
index 25d14b3a7280..b1c22dc87047 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -450,6 +450,12 @@ prepare_slab_obj_exts_hook(struct kmem_cache *s, gfp_t flags, void *p)
 	if (!need_slab_obj_ext())
 		return NULL;
 
+	if (s->flags & SLAB_NO_OBJ_EXT)
+		return NULL;
+
+	if (flags & __GFP_NO_OBJ_EXT)
+		return NULL;
+
 	slab = virt_to_slab(p);
 	if (!slab_obj_exts(slab) &&
 	    WARN(alloc_slab_obj_exts(slab, s, flags, false),

diff --git a/mm/slab_common.c b/mm/slab_common.c
index f11cc072b01e..42777d66d0e3 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -220,6 +220,8 @@ int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
 	void *vec;
 
 	gfp &= ~OBJCGS_CLEAR_MASK;
+	/* Prevent recursive extension vector allocation */
+	gfp |= __GFP_NO_OBJ_EXT;
 	vec = kcalloc_node(objects, sizeof(struct slabobj_ext), gfp,
 			   slab_nid(slab));
 	if (!vec)

From patchwork Mon May 1 16:54:22 2023
Subject: [PATCH 12/40] slab: objext: introduce objext_flags as extension to page_memcg_data_flags
Date: Mon, 1 May 2023 09:54:22 -0700
Message-ID: <20230501165450.15352-13-surenb@google.com>
In-Reply-To: <20230501165450.15352-1-surenb@google.com>
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

Introduce objext_flags to store additional objext flags unrelated to memcg.
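The patch below hinges on one convention: the low bits of the obj_exts and
memcg_data words are flag bits and everything above them is a pointer, with
OBJEXTS_FLAGS_MASK sized to cover however many flag bits the configuration
provides. A sketch of the decoding, with invented helper names (the real
accessors are slab_obj_exts() and the folio_memcg*() helpers updated by the
diff):

/* Illustrative only; assumes the obj_exts word introduced earlier in the
 * series and the OBJEXTS_FLAGS_MASK defined in the patch below. */
static inline struct slabobj_ext *objexts_vec(unsigned long obj_exts)
{
	/* Strip the flag bits; what remains is the vector pointer. */
	return (struct slabobj_ext *)(obj_exts & ~OBJEXTS_FLAGS_MASK);
}

static inline unsigned long objexts_flags(unsigned long obj_exts)
{
	return obj_exts & OBJEXTS_FLAGS_MASK;
}

With CONFIG_MEMCG the mask also covers the existing MEMCG_DATA_* bits, which
is why the folio_memcg helpers in the diff switch from MEMCG_DATA_FLAGS_MASK
to OBJEXTS_FLAGS_MASK.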
Signed-off-by: Suren Baghdasaryan --- include/linux/memcontrol.h | 29 ++++++++++++++++++++++------- mm/slab.h | 4 +--- 2 files changed, 23 insertions(+), 10 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index b9fd9732a52b..5e2da63c525f 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -347,7 +347,22 @@ enum page_memcg_data_flags { __NR_MEMCG_DATA_FLAGS = (1UL << 2), }; -#define MEMCG_DATA_FLAGS_MASK (__NR_MEMCG_DATA_FLAGS - 1) +#define __FIRST_OBJEXT_FLAG __NR_MEMCG_DATA_FLAGS + +#else /* CONFIG_MEMCG */ + +#define __FIRST_OBJEXT_FLAG (1UL << 0) + +#endif /* CONFIG_MEMCG */ + +enum objext_flags { + /* the next bit after the last actual flag */ + __NR_OBJEXTS_FLAGS = __FIRST_OBJEXT_FLAG, +}; + +#define OBJEXTS_FLAGS_MASK (__NR_OBJEXTS_FLAGS - 1) + +#ifdef CONFIG_MEMCG static inline bool folio_memcg_kmem(struct folio *folio); @@ -381,7 +396,7 @@ static inline struct mem_cgroup *__folio_memcg(struct folio *folio) VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJEXTS, folio); VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_KMEM, folio); - return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); + return (struct mem_cgroup *)(memcg_data & ~OBJEXTS_FLAGS_MASK); } /* @@ -402,7 +417,7 @@ static inline struct obj_cgroup *__folio_objcg(struct folio *folio) VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJEXTS, folio); VM_BUG_ON_FOLIO(!(memcg_data & MEMCG_DATA_KMEM), folio); - return (struct obj_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); + return (struct obj_cgroup *)(memcg_data & ~OBJEXTS_FLAGS_MASK); } /* @@ -459,11 +474,11 @@ static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio) if (memcg_data & MEMCG_DATA_KMEM) { struct obj_cgroup *objcg; - objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); + objcg = (void *)(memcg_data & ~OBJEXTS_FLAGS_MASK); return obj_cgroup_memcg(objcg); } - return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); + return (struct mem_cgroup *)(memcg_data & ~OBJEXTS_FLAGS_MASK); } /* @@ -502,11 +517,11 @@ static inline struct mem_cgroup *folio_memcg_check(struct folio *folio) if (memcg_data & MEMCG_DATA_KMEM) { struct obj_cgroup *objcg; - objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); + objcg = (void *)(memcg_data & ~OBJEXTS_FLAGS_MASK); return obj_cgroup_memcg(objcg); } - return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK); + return (struct mem_cgroup *)(memcg_data & ~OBJEXTS_FLAGS_MASK); } static inline struct mem_cgroup *page_memcg_check(struct page *page) diff --git a/mm/slab.h b/mm/slab.h index b1c22dc87047..bec202bdcfb8 100644 --- a/mm/slab.h +++ b/mm/slab.h @@ -409,10 +409,8 @@ static inline struct slabobj_ext *slab_obj_exts(struct slab *slab) slab_page(slab)); VM_BUG_ON_PAGE(obj_exts & MEMCG_DATA_KMEM, slab_page(slab)); - return (struct slabobj_ext *)(obj_exts & ~MEMCG_DATA_FLAGS_MASK); -#else - return (struct slabobj_ext *)obj_exts; #endif + return (struct slabobj_ext *)(obj_exts & ~OBJEXTS_FLAGS_MASK); } int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s, From patchwork Mon May 1 16:54:23 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suren Baghdasaryan X-Patchwork-Id: 89112 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp71443vqo; Mon, 1 May 2023 10:19:15 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ6e+VMf86cNU8e2s+ndBjyJfp5lPvEaFj6On4VjwXA6s7VfnYstZyu53HypAOKC/mKqWYpj X-Received: by 
Date: Mon, 1 May 2023 09:54:23 -0700
Message-ID: <20230501165450.15352-14-surenb@google.com>
In-Reply-To: <20230501165450.15352-1-surenb@google.com>
Subject: [PATCH 13/40] lib: code tagging framework
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
X-GMAIL-THRID: =?utf-8?q?1764713096402579645?= X-GMAIL-MSGID: =?utf-8?q?1764713096402579645?= Add basic infrastructure to support code tagging which stores tag common information consisting of the module name, function, file name and line number. Provide functions to register a new code tag type and navigate between code tags. Co-developed-by: Kent Overstreet Signed-off-by: Kent Overstreet Signed-off-by: Suren Baghdasaryan --- include/linux/codetag.h | 71 ++++++++++++++ lib/Kconfig.debug | 4 + lib/Makefile | 1 + lib/codetag.c | 199 ++++++++++++++++++++++++++++++++++++++++ 4 files changed, 275 insertions(+) create mode 100644 include/linux/codetag.h create mode 100644 lib/codetag.c diff --git a/include/linux/codetag.h b/include/linux/codetag.h new file mode 100644 index 000000000000..a9d7adecc2a5 --- /dev/null +++ b/include/linux/codetag.h @@ -0,0 +1,71 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * code tagging framework + */ +#ifndef _LINUX_CODETAG_H +#define _LINUX_CODETAG_H + +#include + +struct codetag_iterator; +struct codetag_type; +struct seq_buf; +struct module; + +/* + * An instance of this structure is created in a special ELF section at every + * code location being tagged. At runtime, the special section is treated as + * an array of these. + */ +struct codetag { + unsigned int flags; /* used in later patches */ + unsigned int lineno; + const char *modname; + const char *function; + const char *filename; +} __aligned(8); + +union codetag_ref { + struct codetag *ct; +}; + +struct codetag_range { + struct codetag *start; + struct codetag *stop; +}; + +struct codetag_module { + struct module *mod; + struct codetag_range range; +}; + +struct codetag_type_desc { + const char *section; + size_t tag_size; +}; + +struct codetag_iterator { + struct codetag_type *cttype; + struct codetag_module *cmod; + unsigned long mod_id; + struct codetag *ct; +}; + +#define CODE_TAG_INIT { \ + .modname = KBUILD_MODNAME, \ + .function = __func__, \ + .filename = __FILE__, \ + .lineno = __LINE__, \ + .flags = 0, \ +} + +void codetag_lock_module_list(struct codetag_type *cttype, bool lock); +struct codetag_iterator codetag_get_ct_iter(struct codetag_type *cttype); +struct codetag *codetag_next_ct(struct codetag_iterator *iter); + +void codetag_to_text(struct seq_buf *out, struct codetag *ct); + +struct codetag_type * +codetag_register_type(const struct codetag_type_desc *desc); + +#endif /* _LINUX_CODETAG_H */ diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index ce51d4dc6803..5078da7d3ffb 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -957,6 +957,10 @@ config DEBUG_STACKOVERFLOW If in doubt, say "N". 
+config CODE_TAGGING + bool + select KALLSYMS + source "lib/Kconfig.kasan" source "lib/Kconfig.kfence" source "lib/Kconfig.kmsan" diff --git a/lib/Makefile b/lib/Makefile index 293a0858a3f8..28d70ecf2976 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -228,6 +228,7 @@ obj-$(CONFIG_OF_RECONFIG_NOTIFIER_ERROR_INJECT) += \ of-reconfig-notifier-error-inject.o obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o +obj-$(CONFIG_CODE_TAGGING) += codetag.o lib-$(CONFIG_GENERIC_BUG) += bug.o obj-$(CONFIG_HAVE_ARCH_TRACEHOOK) += syscall.o diff --git a/lib/codetag.c b/lib/codetag.c new file mode 100644 index 000000000000..7708f8388e55 --- /dev/null +++ b/lib/codetag.c @@ -0,0 +1,199 @@ +// SPDX-License-Identifier: GPL-2.0-only +#include +#include +#include +#include +#include +#include + +struct codetag_type { + struct list_head link; + unsigned int count; + struct idr mod_idr; + struct rw_semaphore mod_lock; /* protects mod_idr */ + struct codetag_type_desc desc; +}; + +static DEFINE_MUTEX(codetag_lock); +static LIST_HEAD(codetag_types); + +void codetag_lock_module_list(struct codetag_type *cttype, bool lock) +{ + if (lock) + down_read(&cttype->mod_lock); + else + up_read(&cttype->mod_lock); +} + +struct codetag_iterator codetag_get_ct_iter(struct codetag_type *cttype) +{ + struct codetag_iterator iter = { + .cttype = cttype, + .cmod = NULL, + .mod_id = 0, + .ct = NULL, + }; + + return iter; +} + +static inline struct codetag *get_first_module_ct(struct codetag_module *cmod) +{ + return cmod->range.start < cmod->range.stop ? cmod->range.start : NULL; +} + +static inline +struct codetag *get_next_module_ct(struct codetag_iterator *iter) +{ + struct codetag *res = (struct codetag *) + ((char *)iter->ct + iter->cttype->desc.tag_size); + + return res < iter->cmod->range.stop ? res : NULL; +} + +struct codetag *codetag_next_ct(struct codetag_iterator *iter) +{ + struct codetag_type *cttype = iter->cttype; + struct codetag_module *cmod; + struct codetag *ct; + + lockdep_assert_held(&cttype->mod_lock); + + if (unlikely(idr_is_empty(&cttype->mod_idr))) + return NULL; + + ct = NULL; + while (true) { + cmod = idr_find(&cttype->mod_idr, iter->mod_id); + + /* If module was removed move to the next one */ + if (!cmod) + cmod = idr_get_next_ul(&cttype->mod_idr, + &iter->mod_id); + + /* Exit if no more modules */ + if (!cmod) + break; + + if (cmod != iter->cmod) { + iter->cmod = cmod; + ct = get_first_module_ct(cmod); + } else + ct = get_next_module_ct(iter); + + if (ct) + break; + + iter->mod_id++; + } + + iter->ct = ct; + return ct; +} + +void codetag_to_text(struct seq_buf *out, struct codetag *ct) +{ + seq_buf_printf(out, "%s:%u module:%s func:%s", + ct->filename, ct->lineno, + ct->modname, ct->function); +} + +static inline size_t range_size(const struct codetag_type *cttype, + const struct codetag_range *range) +{ + return ((char *)range->stop - (char *)range->start) / + cttype->desc.tag_size; +} + +static void *get_symbol(struct module *mod, const char *prefix, const char *name) +{ + char buf[64]; + int res; + + res = snprintf(buf, sizeof(buf), "%s%s", prefix, name); + if (WARN_ON(res < 1 || res > sizeof(buf))) + return NULL; + + return mod ? 
+ (void *)find_kallsyms_symbol_value(mod, buf) : + (void *)kallsyms_lookup_name(buf); +} + +static struct codetag_range get_section_range(struct module *mod, + const char *section) +{ + return (struct codetag_range) { + get_symbol(mod, "__start_", section), + get_symbol(mod, "__stop_", section), + }; +} + +static int codetag_module_init(struct codetag_type *cttype, struct module *mod) +{ + struct codetag_range range; + struct codetag_module *cmod; + int err; + + range = get_section_range(mod, cttype->desc.section); + if (!range.start || !range.stop) { + pr_warn("Failed to load code tags of type %s from the module %s\n", + cttype->desc.section, + mod ? mod->name : "(built-in)"); + return -EINVAL; + } + + /* Ignore empty ranges */ + if (range.start == range.stop) + return 0; + + BUG_ON(range.start > range.stop); + + cmod = kmalloc(sizeof(*cmod), GFP_KERNEL); + if (unlikely(!cmod)) + return -ENOMEM; + + cmod->mod = mod; + cmod->range = range; + + down_write(&cttype->mod_lock); + err = idr_alloc(&cttype->mod_idr, cmod, 0, 0, GFP_KERNEL); + if (err >= 0) + cttype->count += range_size(cttype, &range); + up_write(&cttype->mod_lock); + + if (err < 0) { + kfree(cmod); + return err; + } + + return 0; +} + +struct codetag_type * +codetag_register_type(const struct codetag_type_desc *desc) +{ + struct codetag_type *cttype; + int err; + + BUG_ON(desc->tag_size <= 0); + + cttype = kzalloc(sizeof(*cttype), GFP_KERNEL); + if (unlikely(!cttype)) + return ERR_PTR(-ENOMEM); + + cttype->desc = *desc; + idr_init(&cttype->mod_idr); + init_rwsem(&cttype->mod_lock); + + err = codetag_module_init(cttype, NULL); + if (unlikely(err)) { + kfree(cttype); + return ERR_PTR(err); + } + + mutex_lock(&codetag_lock); + list_add_tail(&cttype->link, &codetag_types); + mutex_unlock(&codetag_lock); + + return cttype; +} From patchwork Mon May 1 16:54:24 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suren Baghdasaryan X-Patchwork-Id: 89099 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp70055vqo; Mon, 1 May 2023 10:17:02 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ5xk4+2CVlSLrDPTTjd3e+nsZPVMHyPLC+my/QC7nnERHiI5qpZcMx1CguzGANr/UCgYAs5 X-Received: by 2002:a17:90b:f8f:b0:24d:e504:c475 with SMTP id ft15-20020a17090b0f8f00b0024de504c475mr8507052pjb.21.1682961422123; Mon, 01 May 2023 10:17:02 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682961422; cv=none; d=google.com; s=arc-20160816; b=uwdxujpBGXpNB57+DF8nedgwGSfGllufwNV0vAnaZ7d9n5hVx0kWQJonnxNH1uUEbx mID0Ypb0GiNzLvHdJM2iNwpM7iBRp4hMIt6LlcKpGNe2M5atrO8vpiaubnN8QXufoZKs 1kznetQK6bGWk7+jGlYMc7fsNEF/JIottJUCBA6TXt2A6g5Fv7UNEVPMIuUXP+ui/Q73 /mh2aMHCQD001mkm673bRyRh8QZ4CY3FVNniFk1pzHJ5HQoCNcHMlV42UvqgWmefrhm5 6B0PZNn3VRFWO992iKdHJ89wG1Wy6Umy58Uj7gUaHWi02U5z3BTjYkVOIQahdO9P/OuR mVjw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:cc:to:from:subject:message-id:references :mime-version:in-reply-to:date:dkim-signature; bh=Fsyg89/vfc29JJa9GS5YhCGnLnEPc8LoOfUR2xEk3nU=; b=OXVItzQD8bjfHekjEpl7CTCGg/PzLNd9Zmr6SuHaWSuarBqqYbeF0Qq/WR8VDZsEAa K7KYAcilGrNQ8j4l2peuELStVKXkIQbKyZ7UfhxFsemFris0bafojtrUtn1vULJc3wjA L3yMh6Hb2sK85o4NoRkzv9v12yHAlcrzwE9wyoSXO4RY9n5kKmYlytfy7KP9wyzLDPdK t/xowFFx2GFgrnE1V6xqmYg6y05U2hpxRQ1pb2BeVIdp2pzdz4DYFB9xXaIxWAD8VP0K qCeX3QtksjFVLwZicdeKlpLtkHQ1SoXrf39TD0Uno0FIpqn+T7uTyrF2idYHnnP4SwEc Z51A== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@google.com 
Date: Mon, 1 May 2023 09:54:24 -0700
Message-ID: <20230501165450.15352-15-surenb@google.com>
In-Reply-To: <20230501165450.15352-1-surenb@google.com>
Subject: [PATCH 14/40] lib: code tagging module support
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

Add support for code tagging from dynamically loaded modules.
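Taken together with the framework from the previous patch, the module hooks
added here are meant to be consumed roughly as in the sketch below. Everything
in it is hypothetical: the "my_tags" section name, the MY_TAG() macro and the
callback are invented, and a real built-in user would additionally need the
__start_my_tags/__stop_my_tags linker symbols that get_section_range() looks
up.

#include <linux/codetag.h>
#include <linux/err.h>
#include <linux/module.h>
#include <linux/seq_buf.h>

/* Drop one struct codetag into the "my_tags" section at a call site. */
#define MY_TAG()							\
	do {								\
		static struct codetag _tag __used			\
			__section("my_tags") = CODE_TAG_INIT;		\
	} while (0)

static void my_tags_module_load(struct codetag_type *cttype,
				struct codetag_module *cmod)
{
	pr_info("my_tags: picked up tags from %s\n",
		cmod->mod ? cmod->mod->name : "(built-in)");
}

static const struct codetag_type_desc my_tags_desc = {
	.section	= "my_tags",
	.tag_size	= sizeof(struct codetag),
	.module_load	= my_tags_module_load,
};

static struct codetag_type *my_tags;

static int __init my_tags_init(void)
{
	my_tags = codetag_register_type(&my_tags_desc);
	return PTR_ERR_OR_ZERO(my_tags);
}

/* Dump every known call site using the iterator API. */
static void my_tags_dump(struct seq_buf *out)
{
	struct codetag_iterator iter;
	struct codetag *ct;

	codetag_lock_module_list(my_tags, true);
	iter = codetag_get_ct_iter(my_tags);
	while ((ct = codetag_next_ct(&iter))) {
		codetag_to_text(out, ct);
		seq_buf_putc(out, '\n');
	}
	codetag_lock_module_list(my_tags, false);
}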
Signed-off-by: Suren Baghdasaryan Co-developed-by: Kent Overstreet Signed-off-by: Kent Overstreet --- include/linux/codetag.h | 12 +++++++++ kernel/module/main.c | 4 +++ lib/codetag.c | 58 +++++++++++++++++++++++++++++++++++++++-- 3 files changed, 72 insertions(+), 2 deletions(-) diff --git a/include/linux/codetag.h b/include/linux/codetag.h index a9d7adecc2a5..386733e89b31 100644 --- a/include/linux/codetag.h +++ b/include/linux/codetag.h @@ -42,6 +42,10 @@ struct codetag_module { struct codetag_type_desc { const char *section; size_t tag_size; + void (*module_load)(struct codetag_type *cttype, + struct codetag_module *cmod); + void (*module_unload)(struct codetag_type *cttype, + struct codetag_module *cmod); }; struct codetag_iterator { @@ -68,4 +72,12 @@ void codetag_to_text(struct seq_buf *out, struct codetag *ct); struct codetag_type * codetag_register_type(const struct codetag_type_desc *desc); +#ifdef CONFIG_CODE_TAGGING +void codetag_load_module(struct module *mod); +void codetag_unload_module(struct module *mod); +#else +static inline void codetag_load_module(struct module *mod) {} +static inline void codetag_unload_module(struct module *mod) {} +#endif + #endif /* _LINUX_CODETAG_H */ diff --git a/kernel/module/main.c b/kernel/module/main.c index 044aa2c9e3cb..4232e7bff549 100644 --- a/kernel/module/main.c +++ b/kernel/module/main.c @@ -56,6 +56,7 @@ #include #include #include +#include #include #include #include "internal.h" @@ -1249,6 +1250,7 @@ static void free_module(struct module *mod) { trace_module_free(mod); + codetag_unload_module(mod); mod_sysfs_teardown(mod); /* @@ -2974,6 +2976,8 @@ static int load_module(struct load_info *info, const char __user *uargs, /* Get rid of temporary copy. */ free_copy(info, flags); + codetag_load_module(mod); + /* Done! */ trace_module_load(mod); diff --git a/lib/codetag.c b/lib/codetag.c index 7708f8388e55..4ea57fb37346 100644 --- a/lib/codetag.c +++ b/lib/codetag.c @@ -108,15 +108,20 @@ static inline size_t range_size(const struct codetag_type *cttype, static void *get_symbol(struct module *mod, const char *prefix, const char *name) { char buf[64]; + void *ret; int res; res = snprintf(buf, sizeof(buf), "%s%s", prefix, name); if (WARN_ON(res < 1 || res > sizeof(buf))) return NULL; - return mod ? + preempt_disable(); + ret = mod ? 
(void *)find_kallsyms_symbol_value(mod, buf) : (void *)kallsyms_lookup_name(buf); + preempt_enable(); + + return ret; } static struct codetag_range get_section_range(struct module *mod, @@ -157,8 +162,11 @@ static int codetag_module_init(struct codetag_type *cttype, struct module *mod) down_write(&cttype->mod_lock); err = idr_alloc(&cttype->mod_idr, cmod, 0, 0, GFP_KERNEL); - if (err >= 0) + if (err >= 0) { cttype->count += range_size(cttype, &range); + if (cttype->desc.module_load) + cttype->desc.module_load(cttype, cmod); + } up_write(&cttype->mod_lock); if (err < 0) { @@ -197,3 +205,49 @@ codetag_register_type(const struct codetag_type_desc *desc) return cttype; } + +void codetag_load_module(struct module *mod) +{ + struct codetag_type *cttype; + + if (!mod) + return; + + mutex_lock(&codetag_lock); + list_for_each_entry(cttype, &codetag_types, link) + codetag_module_init(cttype, mod); + mutex_unlock(&codetag_lock); +} + +void codetag_unload_module(struct module *mod) +{ + struct codetag_type *cttype; + + if (!mod) + return; + + mutex_lock(&codetag_lock); + list_for_each_entry(cttype, &codetag_types, link) { + struct codetag_module *found = NULL; + struct codetag_module *cmod; + unsigned long mod_id, tmp; + + down_write(&cttype->mod_lock); + idr_for_each_entry_ul(&cttype->mod_idr, cmod, tmp, mod_id) { + if (cmod->mod && cmod->mod == mod) { + found = cmod; + break; + } + } + if (found) { + if (cttype->desc.module_unload) + cttype->desc.module_unload(cttype, cmod); + + cttype->count -= range_size(cttype, &cmod->range); + idr_remove(&cttype->mod_idr, mod_id); + kfree(cmod); + } + up_write(&cttype->mod_lock); + } + mutex_unlock(&codetag_lock); +} From patchwork Mon May 1 16:54:25 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suren Baghdasaryan X-Patchwork-Id: 89092 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp63533vqo; Mon, 1 May 2023 10:07:14 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ6J626pbuI/HjnVgQorMOZuDvpPwPoyBIuhIR1PxPhwjT2I0rqCv3W7gleqLNksxdm8Xsx7 X-Received: by 2002:a17:90a:fd81:b0:24e:f33:8a1 with SMTP id cx1-20020a17090afd8100b0024e0f3308a1mr2621624pjb.1.1682960834526; Mon, 01 May 2023 10:07:14 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682960834; cv=none; d=google.com; s=arc-20160816; b=mljxfA6lg5cs6GtrGJXsVzRpjZwzI/V/Jr/mPXGJwj70lLXi+idKS45Bg95wzx7GJv Fj44p2BsSAiqzsON/hr0Wn/7GCXfSiI+b2e8u49ZZalyqOyiHCUq/acqSGRrXFo7xjdH acnNUe95VHRu+0Q1S1PFYfmoULUhNKbIuDMpKSOyzKreJsaE6ZrCbNWuojbdm2lLUArb rdQpMMbo3+sYn8M4mN1aZbBb/HlGWWjuGBl0muJAy52D1zdTYCftbpzLiATrzuMMj+ls wRJws9AJAUTwUTlQOMdbiZteUPkWn1cfI0HpQAb1tgRwM46bwss4Mw3jdUqVVkVxDSgu QnEw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:cc:to:from:subject:message-id:references :mime-version:in-reply-to:date:dkim-signature; bh=KlR34XlXux9ljlJXPX99aK4k/cmOetFB9Q46atx2T7Y=; b=gHT54jEo7o98FVo68nYFqGTOQuvK7N4nKmGeJBDPDyi+pRVKcCKGT1Muw3uHp1qHPG gpAefN85uaV/mCsn4nSqhwtoJHqxmufapv5ZiGDuMM20FezhxLwPdr1vy5/E0MGssvFJ 0mcXu6Nbn4fiLd41m64qrjDBfaodyoYRi1nXHNi2k4gjvdawuZEH1x4kdugzBNdC6edI J3J5SXS3RFqpF87XzOCE7eklSiod4pqcE/4Z310ZRUZIbjCCjdIeP0qoBqpOAsnKKoU7 nl2KIxwqhax51ozi9a2ihxdRItOmX2b3jj5gwNkMY9RFNLygrWoszxwk2PgCkS3ItuCH ODbQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@google.com header.s=20221208 header.b=QWt5E67j; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted 
Date: Mon, 1 May 2023 09:54:25 -0700
Message-ID: <20230501165450.15352-16-surenb@google.com>
In-Reply-To: <20230501165450.15352-1-surenb@google.com>
Subject: [PATCH 15/40] lib: prevent module unloading if memory is not freed
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

Skip freeing module's data section if there are non-zero allocation tags
because otherwise, once these allocations are freed, the access to their
code tag would cause UAF.
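The contract this patch creates for tag-type owners looks roughly like the
sketch below: if the ->module_unload() callback finds tags that still have
users, it returns false, codetag_unload_module() propagates that, and
free_module() then keeps the module's core data sections alive so the tags
stay readable. struct my_tag and its users counter are invented; the matching
codetag_type_desc would set .tag_size = sizeof(struct my_tag).

#include <linux/atomic.h>
#include <linux/codetag.h>
#include <linux/printk.h>

struct my_tag {
	struct codetag	ct;
	atomic_t	users;		/* invented liveness counter */
};

static bool my_tags_module_unload(struct codetag_type *cttype,
				  struct codetag_module *cmod)
{
	struct my_tag *tag  = (struct my_tag *)cmod->range.start;
	struct my_tag *stop = (struct my_tag *)cmod->range.stop;
	bool unload_ok = true;

	for (; tag < stop; tag++) {
		if (atomic_read(&tag->users)) {
			pr_warn("%s:%u still has %d users, keeping module data\n",
				tag->ct.filename, tag->ct.lineno,
				atomic_read(&tag->users));
			unload_ok = false;
		}
	}

	/* false => codetag_unload_module() returns false to free_module(). */
	return unload_ok;
}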
Signed-off-by: Suren Baghdasaryan --- include/linux/codetag.h | 6 +++--- kernel/module/main.c | 23 +++++++++++++++-------- lib/codetag.c | 11 ++++++++--- 3 files changed, 26 insertions(+), 14 deletions(-) diff --git a/include/linux/codetag.h b/include/linux/codetag.h index 386733e89b31..d98e4c8e86f0 100644 --- a/include/linux/codetag.h +++ b/include/linux/codetag.h @@ -44,7 +44,7 @@ struct codetag_type_desc { size_t tag_size; void (*module_load)(struct codetag_type *cttype, struct codetag_module *cmod); - void (*module_unload)(struct codetag_type *cttype, + bool (*module_unload)(struct codetag_type *cttype, struct codetag_module *cmod); }; @@ -74,10 +74,10 @@ codetag_register_type(const struct codetag_type_desc *desc); #ifdef CONFIG_CODE_TAGGING void codetag_load_module(struct module *mod); -void codetag_unload_module(struct module *mod); +bool codetag_unload_module(struct module *mod); #else static inline void codetag_load_module(struct module *mod) {} -static inline void codetag_unload_module(struct module *mod) {} +static inline bool codetag_unload_module(struct module *mod) { return true; } #endif #endif /* _LINUX_CODETAG_H */ diff --git a/kernel/module/main.c b/kernel/module/main.c index 4232e7bff549..9ff56f2bb09d 100644 --- a/kernel/module/main.c +++ b/kernel/module/main.c @@ -1218,15 +1218,19 @@ static void *module_memory_alloc(unsigned int size, enum mod_mem_type type) return module_alloc(size); } -static void module_memory_free(void *ptr, enum mod_mem_type type) +static void module_memory_free(void *ptr, enum mod_mem_type type, + bool unload_codetags) { + if (!unload_codetags && mod_mem_type_is_core_data(type)) + return; + if (mod_mem_use_vmalloc(type)) vfree(ptr); else module_memfree(ptr); } -static void free_mod_mem(struct module *mod) +static void free_mod_mem(struct module *mod, bool unload_codetags) { for_each_mod_mem_type(type) { struct module_memory *mod_mem = &mod->mem[type]; @@ -1237,20 +1241,23 @@ static void free_mod_mem(struct module *mod) /* Free lock-classes; relies on the preceding sync_rcu(). */ lockdep_free_key_range(mod_mem->base, mod_mem->size); if (mod_mem->size) - module_memory_free(mod_mem->base, type); + module_memory_free(mod_mem->base, type, + unload_codetags); } /* MOD_DATA hosts mod, so free it at last */ lockdep_free_key_range(mod->mem[MOD_DATA].base, mod->mem[MOD_DATA].size); - module_memory_free(mod->mem[MOD_DATA].base, MOD_DATA); + module_memory_free(mod->mem[MOD_DATA].base, MOD_DATA, unload_codetags); } /* Free a module, remove from lists, etc. 
*/ static void free_module(struct module *mod) { + bool unload_codetags; + trace_module_free(mod); - codetag_unload_module(mod); + unload_codetags = codetag_unload_module(mod); mod_sysfs_teardown(mod); /* @@ -1292,7 +1299,7 @@ static void free_module(struct module *mod) kfree(mod->args); percpu_modfree(mod); - free_mod_mem(mod); + free_mod_mem(mod, unload_codetags); } void *__symbol_get(const char *symbol) @@ -2294,7 +2301,7 @@ static int move_module(struct module *mod, struct load_info *info) return 0; out_enomem: for (t--; t >= 0; t--) - module_memory_free(mod->mem[t].base, t); + module_memory_free(mod->mem[t].base, t, true); return ret; } @@ -2424,7 +2431,7 @@ static void module_deallocate(struct module *mod, struct load_info *info) percpu_modfree(mod); module_arch_freeing_init(mod); - free_mod_mem(mod); + free_mod_mem(mod, true); } int __weak module_finalize(const Elf_Ehdr *hdr, diff --git a/lib/codetag.c b/lib/codetag.c index 4ea57fb37346..0ad4ea66c769 100644 --- a/lib/codetag.c +++ b/lib/codetag.c @@ -5,6 +5,7 @@ #include #include #include +#include struct codetag_type { struct list_head link; @@ -219,12 +220,13 @@ void codetag_load_module(struct module *mod) mutex_unlock(&codetag_lock); } -void codetag_unload_module(struct module *mod) +bool codetag_unload_module(struct module *mod) { struct codetag_type *cttype; + bool unload_ok = true; if (!mod) - return; + return true; mutex_lock(&codetag_lock); list_for_each_entry(cttype, &codetag_types, link) { @@ -241,7 +243,8 @@ void codetag_unload_module(struct module *mod) } if (found) { if (cttype->desc.module_unload) - cttype->desc.module_unload(cttype, cmod); + if (!cttype->desc.module_unload(cttype, cmod)) + unload_ok = false; cttype->count -= range_size(cttype, &cmod->range); idr_remove(&cttype->mod_idr, mod_id); @@ -250,4 +253,6 @@ void codetag_unload_module(struct module *mod) up_write(&cttype->mod_lock); } mutex_unlock(&codetag_lock); + + return unload_ok; } From patchwork Mon May 1 16:54:26 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suren Baghdasaryan X-Patchwork-Id: 89102 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp70448vqo; Mon, 1 May 2023 10:17:39 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ7aMmdPqhMvtHrdQPBZH9vnGIORbvuJLOizDO6/qxZiDsDU7YfwSn+GrMh0yQlkQjDFmgAl X-Received: by 2002:a05:6a00:1151:b0:641:4d8a:23e3 with SMTP id b17-20020a056a00115100b006414d8a23e3mr8637619pfm.13.1682961459355; Mon, 01 May 2023 10:17:39 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682961459; cv=none; d=google.com; s=arc-20160816; b=Y0h3h9TS2SwcjCxhxJQBUqD55thYOXtrXkkSHQLHK1xhExktibM5ZmIVbQP2xbeObA P4xvigK+sPFhEOTIu5kjSCmANTA8oT9OT/jpiUDIVpt/mKr5arqiNecE68SAb5uIVBrD vcJNtWZ/9OOu5grVz37qCEay6/92GoPXHL3MLryu9Reeu/8DyNF/EfHHYDe8LNSRuhFM blfhy8PDC6uRexe2B1zr8eJoUQ1x2WV5CN8CInNGU5C09TJPjah1DxV70xBx90qdKx26 /cliZPL1aNYQJywJAoVJjEz+h9+FKEeHlqs161ERRCLxJlyKjlf2L6Del9M6J3GJ+/6y gtZA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:cc:to:from:subject:message-id:references :mime-version:in-reply-to:date:dkim-signature; bh=cfU8v4uljuDyceCulpsszZZ0MEofJFrAynDM4I2qv80=; b=HzZhf5kt6xogqrupTD2bNGvZAgJE1qGrRGRiccQAvaViDh29r8qevEMoZL66BLrbmt sJmGn0DbPS1RPxSNKNstT6+Da9O3IPnNDZWwblLyCaRHOYwQ7/hKxJVXDOZgne4PN78b H67EGtrblNq/q2Kh9HvEgZOYewmlaUnAc2jfiLAVITjONpH9oRFPiodiXM8DbMmbHazW ZOfH4TIwWjmXlAiefBdmCb0RkYP53ODmeehdnAiZxggNP4bsI3eQQuPEHtHFgDhplQWa 
ZGqIvtkjmC2BId6fJHIvaS20fMATbVuiQ+YchuLB3Na52TWTUT0C/+c7AfW0+cZGrJl/ VbMA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@google.com header.s=20221208 header.b=7DF+QMYA; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id j23-20020a632317000000b0051a650b8f52si27853057pgj.259.2023.05.01.10.17.25; Mon, 01 May 2023 10:17:39 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@google.com header.s=20221208 header.b=7DF+QMYA; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232778AbjEAQ6m (ORCPT + 99 others); Mon, 1 May 2023 12:58:42 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33122 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232573AbjEAQ51 (ORCPT ); Mon, 1 May 2023 12:57:27 -0400 Received: from mail-yb1-xb49.google.com (mail-yb1-xb49.google.com [IPv6:2607:f8b0:4864:20::b49]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 98FBD2719 for ; Mon, 1 May 2023 09:55:57 -0700 (PDT) Received: by mail-yb1-xb49.google.com with SMTP id 3f1490d57ef6-b9a6f15287eso27706473276.1 for ; Mon, 01 May 2023 09:55:57 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1682960145; x=1685552145; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=cfU8v4uljuDyceCulpsszZZ0MEofJFrAynDM4I2qv80=; b=7DF+QMYAAMLutPK+67W4THAVtEPwoqDsbe/74l1tBpyC07tcRQ6NcM5IV8D6bbGSDO mAKOZqw8S8PkrBoJSwJTCQQPpA7SwA7cyZxlWbMDiokIgQwGSl9egB8QrEXUBH/T64s1 zNCYsAsh3eGpCoX4dQnQUqKiZzSauwZNQoR9XmoXX99k0D1LyT9NYpubJVC6Qn7rUrDx j9q+sgeZA/RMhwqSDAOuZPyoK6vZlZ73Jq2IAunuj8KRGdDA/IrEELOEoHnv6XAiPWlM lMDNSxrkbvL1rl0oDCkvCwbUBvti8udh5vGgrmSEH3MVGbNpRYhf9xH2dPTrsqzR9Dja mOjQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682960145; x=1685552145; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=cfU8v4uljuDyceCulpsszZZ0MEofJFrAynDM4I2qv80=; b=iO69YAhsXgA8JdAanRT8Jc13V2Te1V/5RnX/dMImSPugg7x5v8jYt1GHO8vd/T76Px ZEvlZRmwsNhHutn7PvbA3B+nkuDlMNj4O+4ZGEvt7l3Fabpr61oATCN4iZsk05qLOhlY icIi3ddgnF1H9pMVjtdpRxs6IdbnGB9pI3NVauZx8WszpTdvYj4KhCEbLTxSWEpz84VM MjxnsN+7Ez73daHSJAbvuO67QqIQE0rk9L62iduOYRZBB8uK1/QU3oiHUFJdFvKbCgC1 D3ZQyn/A6xSB05qGdmci8uLN9a2XO3WHnwDjCnlbvTPiFG2DS+ZMcvlb0m2W2S7N/S3c OnlQ== X-Gm-Message-State: AC+VfDwRwN5JtipXPlGR6fWPg9VxxNQcaVRqd6/iDd+1GeOmNdlymkA0 VyjV0XItrQVsQjJlgPE2kVa8pPaX10U= X-Received: from surenb-desktop.mtv.corp.google.com ([2620:15c:211:201:6d24:3efd:facc:7ac4]) (user=surenb job=sendgmr) by 2002:a25:3482:0:b0:b94:6989:7fa6 with SMTP id b124-20020a253482000000b00b9469897fa6mr8599498yba.4.1682960144921; Mon, 01 May 2023 09:55:44 -0700 (PDT) Date: Mon, 1 May 2023 09:54:26 -0700 In-Reply-To: 
In-Reply-To: <20230501165450.15352-1-surenb@google.com>
Message-ID: <20230501165450.15352-17-surenb@google.com>
Subject: [PATCH 16/40] lib: code tagging query helper functions
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: kent.overstreet@linux.dev, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, roman.gushchin@linux.dev, mgorman@suse.de, dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com, corbet@lwn.net, void@manifault.com, peterz@infradead.org, juri.lelli@redhat.com, ldufour@linux.ibm.com, catalin.marinas@arm.com, will@kernel.org, arnd@arndb.de, tglx@linutronix.de, mingo@redhat.com, dave.hansen@linux.intel.com, x86@kernel.org, peterx@redhat.com, david@redhat.com, axboe@kernel.dk, mcgrof@kernel.org, masahiroy@kernel.org, nathan@kernel.org, dennis@kernel.org, tj@kernel.org, muchun.song@linux.dev, rppt@kernel.org, paulmck@kernel.org, pasha.tatashin@soleen.com, yosryahmed@google.com, yuzhao@google.com, dhowells@redhat.com, hughd@google.com, andreyknvl@gmail.com, keescook@chromium.org, ndesaulniers@google.com, gregkh@linuxfoundation.org, ebiggers@google.com, ytcoode@gmail.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, bristot@redhat.com, vschneid@redhat.com, cl@linux.com, penberg@kernel.org, iamjoonsoo.kim@lge.com, 42.hyeyoo@gmail.com, glider@google.com, elver@google.com, dvyukov@google.com, shakeelb@google.com, songmuchun@bytedance.com, jbaron@akamai.com, rientjes@google.com, minchan@google.com, kaleshsingh@google.com, surenb@google.com, kernel-team@android.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-arch@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, kasan-dev@googlegroups.com, cgroups@vger.kernel.org

From: Kent Overstreet

Provide codetag_query_parse() to parse codetag queries and codetag_matches_query() to check if the query affects a given codetag.
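As an illustration only (not part of the patch), a minimal sketch of how a caller might use the two helpers. The query grammar is inferred from codetag_query_parse() below: whitespace-separated "token value" pairs, where the line and index values accept either a single number or a first-last range. The file and function names in the sketch are made up.

#include <linux/codetag.h>
#include <linux/err.h>

/* Hypothetical caller; "shmem.c" and "shmem_alloc_folio" are example values. */
static void codetag_query_example(void)
{
        struct codetag_query q = {};
        char buf[] = "file shmem.c line 100-200 func shmem_alloc_folio";
        char *rest;

        rest = codetag_query_parse(&q, buf);
        if (IS_ERR(rest))
                return; /* a malformed line/index range was rejected */

        /*
         * Here q.filename == "shmem.c", q.first_line == 100,
         * q.last_line == 200, q.match_line is set and
         * q.function == "shmem_alloc_folio".  The strings point into buf
         * (strsep_no_empty() splits it in place), so buf must outlive q.
         * codetag_matches_query(&q, ct, cmod, class) can then be used to
         * test each candidate tag.
         */
}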
Signed-off-by: Kent Overstreet Signed-off-by: Suren Baghdasaryan --- include/linux/codetag.h | 27 ++++++++ lib/codetag.c | 135 ++++++++++++++++++++++++++++++++++++++++ 2 files changed, 162 insertions(+) diff --git a/include/linux/codetag.h b/include/linux/codetag.h index d98e4c8e86f0..87207f199ac9 100644 --- a/include/linux/codetag.h +++ b/include/linux/codetag.h @@ -80,4 +80,31 @@ static inline void codetag_load_module(struct module *mod) {} static inline bool codetag_unload_module(struct module *mod) { return true; } #endif +/* Codetag query parsing */ + +struct codetag_query { + const char *filename; + const char *module; + const char *function; + const char *class; + unsigned int first_line, last_line; + unsigned int first_index, last_index; + unsigned int cur_index; + + bool match_line:1; + bool match_index:1; + + unsigned int set_enabled:1; + unsigned int enabled:2; + + unsigned int set_frequency:1; + unsigned int frequency; +}; + +char *codetag_query_parse(struct codetag_query *q, char *buf); +bool codetag_matches_query(struct codetag_query *q, + const struct codetag *ct, + const struct codetag_module *mod, + const char *class); + #endif /* _LINUX_CODETAG_H */ diff --git a/lib/codetag.c b/lib/codetag.c index 0ad4ea66c769..84f90f3b922c 100644 --- a/lib/codetag.c +++ b/lib/codetag.c @@ -256,3 +256,138 @@ bool codetag_unload_module(struct module *mod) return unload_ok; } + +/* Codetag query parsing */ + +#define CODETAG_QUERY_TOKENS() \ + x(func) \ + x(file) \ + x(line) \ + x(module) \ + x(class) \ + x(index) + +enum tokens { +#define x(name) TOK_##name, + CODETAG_QUERY_TOKENS() +#undef x +}; + +static const char * const token_strs[] = { +#define x(name) #name, + CODETAG_QUERY_TOKENS() +#undef x + NULL +}; + +static int parse_range(char *str, unsigned int *first, unsigned int *last) +{ + char *first_str = str; + char *last_str = strchr(first_str, '-'); + + if (last_str) + *last_str++ = '\0'; + + if (kstrtouint(first_str, 10, first)) + return -EINVAL; + + if (!last_str) + *last = *first; + else if (kstrtouint(last_str, 10, last)) + return -EINVAL; + + return 0; +} + +char *codetag_query_parse(struct codetag_query *q, char *buf) +{ + while (1) { + char *p = buf; + char *str1 = strsep_no_empty(&p, " \t\r\n"); + char *str2 = strsep_no_empty(&p, " \t\r\n"); + int ret, token; + + if (!str1 || !str2) + break; + + token = match_string(token_strs, ARRAY_SIZE(token_strs), str1); + if (token < 0) + break; + + switch (token) { + case TOK_func: + q->function = str2; + break; + case TOK_file: + q->filename = str2; + break; + case TOK_line: + ret = parse_range(str2, &q->first_line, &q->last_line); + if (ret) + return ERR_PTR(ret); + q->match_line = true; + break; + case TOK_module: + q->module = str2; + break; + case TOK_class: + q->class = str2; + break; + case TOK_index: + ret = parse_range(str2, &q->first_index, &q->last_index); + if (ret) + return ERR_PTR(ret); + q->match_index = true; + break; + } + + buf = p; + } + + return buf; +} + +bool codetag_matches_query(struct codetag_query *q, + const struct codetag *ct, + const struct codetag_module *mod, + const char *class) +{ + size_t classlen = q->class ? 
strlen(q->class) : 0; + + if (q->module && + (!mod->mod || + strcmp(q->module, ct->modname))) + return false; + + if (q->filename && + strcmp(q->filename, ct->filename) && + strcmp(q->filename, kbasename(ct->filename))) + return false; + + if (q->function && + strcmp(q->function, ct->function)) + return false; + + /* match against the line number range */ + if (q->match_line && + (ct->lineno < q->first_line || + ct->lineno > q->last_line)) + return false; + + /* match against the class */ + if (classlen && + (strncmp(q->class, class, classlen) || + (class[classlen] && class[classlen] != ':'))) + return false; + + /* match against the fault index */ + if (q->match_index && + (q->cur_index < q->first_index || + q->cur_index > q->last_index)) { + q->cur_index++; + return false; + } + + q->cur_index++; + return true; +} From patchwork Mon May 1 16:54:27 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suren Baghdasaryan X-Patchwork-Id: 89116 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp72283vqo; Mon, 1 May 2023 10:20:46 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ5lt/2YH6h4wZK+PqgtJpKnnxZEzB15snOwedfgkj+2piL/QGQd15QHvM5nQ+HpBUYMn+E4 X-Received: by 2002:a17:90a:138a:b0:24e:27a:e91f with SMTP id i10-20020a17090a138a00b0024e027ae91fmr4390336pja.11.1682961646008; Mon, 01 May 2023 10:20:46 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682961645; cv=none; d=google.com; s=arc-20160816; b=dfOcJmgJvcC3a/h+A7ojeUIwD1b2JSpAkbZuows+yPsBRDf65/2fDNSOuygq0GSc7Y XeCEnCz3X3ETBkk47TBe3K7ljTbX6Ockv3kGb5wv5sOJyoTYga/Mn74xgYF24XfsC/tk A388uiOmDC8uy2g5XCmAJUSIduiRiKpIqex2GDPug0kNnmG2ed1s7S61Bn21NpZKX10p 2U1LwVHv8uIM/Ftl8udHlPViYA/e/JtuctXrn/5ZlFOMCW4VQDVw2vnRLKOlAdacbZJn uP23i60+ncWYhrHKv1cmT4YhfAh72BCuC0yeYh8wojgoycQWj8FAMODijpBmG8juSqhR 535A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:cc:to:from:subject:message-id:references :mime-version:in-reply-to:date:dkim-signature; bh=n4nztol7rM1gH9OZw6qc/mLZ9u/ayJFp51XDo0Zgdb0=; b=m6TUd+z0H2f+BtSHjRgjqx9P0UikQuMrLlBZbAYIen3xqBzVyuphYj9SHTo1pgcHcn eEeKPIXSuzZAo1MqD6pubRCcl5tJS0AFidKPTW8MLsgDATJ8TXyi9gfxpM0OSirBz9U4 1gME/A5i3rQLVzy4wvwu05JITL5YjBNVkFMMHPqYkCFrJD5VBXXZDmZJXYNEJx0xfIaE RUJh7Vuc9jTaX8Z87UiSNQW8LDqxs68vBsPHGMJiJDvEmqW/Wt/rXt8QGBqGmP2fixEl 10FXbBLnwG8TnLwMwlmpUhCJosD3lpnHcaw4ymJLowFFLNIsVQ4Za+ueDcVCodF7LhMd Sl1w== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@google.com header.s=20221208 header.b=EtKwNJp9; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: from out1.vger.email (out1.vger.email. 
Date: Mon, 1 May 2023 09:54:27 -0700
In-Reply-To: <20230501165450.15352-1-surenb@google.com>
Message-ID: <20230501165450.15352-18-surenb@google.com>
Subject: [PATCH 17/40] lib: add allocation tagging support for memory allocation profiling
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Introduce CONFIG_MEM_ALLOC_PROFILING which provides definitions to easily instrument memory allocators. It also registers an "alloc_tags" codetag type with an "allocations" debugfs interface to output allocation tag information. CONFIG_MEM_ALLOC_PROFILING_DEBUG is provided for debugging the memory allocation profiling instrumentation.

Co-developed-by: Kent Overstreet
Signed-off-by: Kent Overstreet
Signed-off-by: Suren Baghdasaryan
--- .../admin-guide/kernel-parameters.txt | 2 + include/asm-generic/codetag.lds.h | 14 ++ include/asm-generic/vmlinux.lds.h | 3 + include/linux/alloc_tag.h | 105 +++++++++++ include/linux/sched.h | 24 +++ lib/Kconfig.debug | 19 ++ lib/Makefile | 2 + lib/alloc_tag.c | 177 ++++++++++++++++++ scripts/module.lds.S | 7 + 9 files changed, 353 insertions(+) create mode 100644 include/asm-generic/codetag.lds.h create mode 100644 include/linux/alloc_tag.h create mode 100644 lib/alloc_tag.c diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index 9e5bab29685f..2fd8e56b7af8 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -3770,6 +3770,8 @@ nomce [X86-32] Disable Machine Check Exception + nomem_profiling Disable memory allocation profiling. + nomfgpt [X86-32] Disable Multi-Function General Purpose Timer usage (for AMD Geode machines).
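As a usage illustration only (not part of the patch): with CONFIG_MEM_ALLOC_PROFILING enabled, the per-callsite counters can be read from the "allocations" file that dbgfs_init() below creates in debugfs. A minimal userspace sketch, assuming debugfs is mounted at /sys/kernel/debug; the per-line output is produced by alloc_tag_to_text() further down in this patch.

#include <stdio.h>

int main(void)
{
        FILE *f = fopen("/sys/kernel/debug/allocations", "r");
        char line[256];

        if (!f) {
                perror("allocations");
                return 1;
        }
        /* Each line: a human-readable size followed by the allocation call
         * site, as emitted by alloc_tag_to_text()/codetag_to_text(). */
        while (fgets(line, sizeof(line), f))
                fputs(line, stdout);
        fclose(f);
        return 0;
}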
diff --git a/include/asm-generic/codetag.lds.h b/include/asm-generic/codetag.lds.h new file mode 100644 index 000000000000..64f536b80380 --- /dev/null +++ b/include/asm-generic/codetag.lds.h @@ -0,0 +1,14 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef __ASM_GENERIC_CODETAG_LDS_H +#define __ASM_GENERIC_CODETAG_LDS_H + +#define SECTION_WITH_BOUNDARIES(_name) \ + . = ALIGN(8); \ + __start_##_name = .; \ + KEEP(*(_name)) \ + __stop_##_name = .; + +#define CODETAG_SECTIONS() \ + SECTION_WITH_BOUNDARIES(alloc_tags) + +#endif /* __ASM_GENERIC_CODETAG_LDS_H */ diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h index d1f57e4868ed..985ff045c2a2 100644 --- a/include/asm-generic/vmlinux.lds.h +++ b/include/asm-generic/vmlinux.lds.h @@ -50,6 +50,8 @@ * [__nosave_begin, __nosave_end] for the nosave data */ +#include + #ifndef LOAD_OFFSET #define LOAD_OFFSET 0 #endif @@ -374,6 +376,7 @@ . = ALIGN(8); \ BOUNDED_SECTION_BY(__dyndbg_classes, ___dyndbg_classes) \ BOUNDED_SECTION_BY(__dyndbg, ___dyndbg) \ + CODETAG_SECTIONS() \ LIKELY_PROFILE() \ BRANCH_PROFILE() \ TRACE_PRINTKS() \ diff --git a/include/linux/alloc_tag.h b/include/linux/alloc_tag.h new file mode 100644 index 000000000000..d913f8d9a7d8 --- /dev/null +++ b/include/linux/alloc_tag.h @@ -0,0 +1,105 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * allocation tagging + */ +#ifndef _LINUX_ALLOC_TAG_H +#define _LINUX_ALLOC_TAG_H + +#include +#include +#include +#include +#include + +/* + * An instance of this structure is created in a special ELF section at every + * allocation callsite. At runtime, the special section is treated as + * an array of these. Embedded codetag utilizes codetag framework. + */ +struct alloc_tag { + struct codetag ct; + struct lazy_percpu_counter bytes_allocated; +} __aligned(8); + +#ifdef CONFIG_MEM_ALLOC_PROFILING + +static inline struct alloc_tag *ct_to_alloc_tag(struct codetag *ct) +{ + return container_of(ct, struct alloc_tag, ct); +} + +#define DEFINE_ALLOC_TAG(_alloc_tag, _old) \ + static struct alloc_tag _alloc_tag __used __aligned(8) \ + __section("alloc_tags") = { .ct = CODE_TAG_INIT }; \ + struct alloc_tag * __maybe_unused _old = alloc_tag_save(&_alloc_tag) + +extern struct static_key_true mem_alloc_profiling_key; + +static inline bool mem_alloc_profiling_enabled(void) +{ + return static_branch_likely(&mem_alloc_profiling_key); +} + +static inline void __alloc_tag_sub(union codetag_ref *ref, size_t bytes, + bool may_allocate) +{ + struct alloc_tag *tag; + +#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG + /* The switch should be checked before this */ + BUG_ON(!mem_alloc_profiling_enabled()); + + WARN_ONCE(ref && !ref->ct, "alloc_tag was not set\n"); +#endif + if (!ref || !ref->ct) + return; + + tag = ct_to_alloc_tag(ref->ct); + + if (may_allocate) + lazy_percpu_counter_add(&tag->bytes_allocated, -bytes); + else + lazy_percpu_counter_add_noupgrade(&tag->bytes_allocated, -bytes); + ref->ct = NULL; +} + +static inline void alloc_tag_sub(union codetag_ref *ref, size_t bytes) +{ + __alloc_tag_sub(ref, bytes, true); +} + +static inline void alloc_tag_sub_noalloc(union codetag_ref *ref, size_t bytes) +{ + __alloc_tag_sub(ref, bytes, false); +} + +static inline void alloc_tag_add(union codetag_ref *ref, struct alloc_tag *tag, size_t bytes) +{ +#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG + /* The switch should be checked before this */ + BUG_ON(!mem_alloc_profiling_enabled()); + + WARN_ONCE(ref && ref->ct, + "alloc_tag was not cleared (got tag for %s:%u)\n",\ + ref->ct->filename, 
ref->ct->lineno); + + WARN_ONCE(!tag, "current->alloc_tag not set"); +#endif + if (!ref || !tag) + return; + + ref->ct = &tag->ct; + lazy_percpu_counter_add(&tag->bytes_allocated, bytes); +} + +#else + +#define DEFINE_ALLOC_TAG(_alloc_tag, _old) +static inline void alloc_tag_sub(union codetag_ref *ref, size_t bytes) {} +static inline void alloc_tag_sub_noalloc(union codetag_ref *ref, size_t bytes) {} +static inline void alloc_tag_add(union codetag_ref *ref, struct alloc_tag *tag, + size_t bytes) {} + +#endif + +#endif /* _LINUX_ALLOC_TAG_H */ diff --git a/include/linux/sched.h b/include/linux/sched.h index 35e7efdea2d9..33708bf8f191 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -763,6 +763,10 @@ struct task_struct { unsigned int flags; unsigned int ptrace; +#ifdef CONFIG_MEM_ALLOC_PROFILING + struct alloc_tag *alloc_tag; +#endif + #ifdef CONFIG_SMP int on_cpu; struct __call_single_node wake_entry; @@ -802,6 +806,7 @@ struct task_struct { struct task_group *sched_task_group; #endif + #ifdef CONFIG_UCLAMP_TASK /* * Clamp values requested for a scheduling entity. @@ -2444,4 +2449,23 @@ static inline void sched_core_fork(struct task_struct *p) { } extern void sched_set_stop_task(int cpu, struct task_struct *stop); +#ifdef CONFIG_MEM_ALLOC_PROFILING +static inline struct alloc_tag *alloc_tag_save(struct alloc_tag *tag) +{ + swap(current->alloc_tag, tag); + return tag; +} + +static inline void alloc_tag_restore(struct alloc_tag *tag, struct alloc_tag *old) +{ +#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG + WARN(current->alloc_tag != tag, "current->alloc_tag was changed:\n"); +#endif + current->alloc_tag = old; +} +#else +static inline struct alloc_tag *alloc_tag_save(struct alloc_tag *tag) { return NULL; } +#define alloc_tag_restore(_tag, _old) +#endif + #endif diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index 5078da7d3ffb..da0a91ea6042 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -961,6 +961,25 @@ config CODE_TAGGING bool select KALLSYMS +config MEM_ALLOC_PROFILING + bool "Enable memory allocation profiling" + default n + depends on DEBUG_FS + select CODE_TAGGING + select LAZY_PERCPU_COUNTER + help + Track allocation source code and record total allocation size + initiated at that code location. The mechanism can be used to track + memory leaks with a low performance impact. + +config MEM_ALLOC_PROFILING_DEBUG + bool "Memory allocation profiler debugging" + default n + depends on MEM_ALLOC_PROFILING + help + Adds warnings with helpful error messages for memory allocation + profiling. 
+ source "lib/Kconfig.kasan" source "lib/Kconfig.kfence" source "lib/Kconfig.kmsan" diff --git a/lib/Makefile b/lib/Makefile index 28d70ecf2976..8d09ccb4d30c 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -229,6 +229,8 @@ obj-$(CONFIG_OF_RECONFIG_NOTIFIER_ERROR_INJECT) += \ obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o obj-$(CONFIG_CODE_TAGGING) += codetag.o +obj-$(CONFIG_MEM_ALLOC_PROFILING) += alloc_tag.o + lib-$(CONFIG_GENERIC_BUG) += bug.o obj-$(CONFIG_HAVE_ARCH_TRACEHOOK) += syscall.o diff --git a/lib/alloc_tag.c b/lib/alloc_tag.c new file mode 100644 index 000000000000..3c4cfeb79862 --- /dev/null +++ b/lib/alloc_tag.c @@ -0,0 +1,177 @@ +// SPDX-License-Identifier: GPL-2.0-only +#include +#include +#include +#include +#include +#include +#include + +DEFINE_STATIC_KEY_TRUE(mem_alloc_profiling_key); + +/* + * Won't need to be exported once page allocation accounting is moved to the + * correct place: + */ +EXPORT_SYMBOL(mem_alloc_profiling_key); + +static int __init mem_alloc_profiling_disable(char *s) +{ + static_branch_disable(&mem_alloc_profiling_key); + return 1; +} +__setup("nomem_profiling", mem_alloc_profiling_disable); + +struct alloc_tag_file_iterator { + struct codetag_iterator ct_iter; + struct seq_buf buf; + char rawbuf[4096]; +}; + +struct user_buf { + char __user *buf; /* destination user buffer */ + size_t size; /* size of requested read */ + ssize_t ret; /* bytes read so far */ +}; + +static int flush_ubuf(struct user_buf *dst, struct seq_buf *src) +{ + if (src->len) { + size_t bytes = min_t(size_t, src->len, dst->size); + int err = copy_to_user(dst->buf, src->buffer, bytes); + + if (err) + return err; + + dst->ret += bytes; + dst->buf += bytes; + dst->size -= bytes; + src->len -= bytes; + memmove(src->buffer, src->buffer + bytes, src->len); + } + + return 0; +} + +static int allocations_file_open(struct inode *inode, struct file *file) +{ + struct codetag_type *cttype = inode->i_private; + struct alloc_tag_file_iterator *iter; + + iter = kzalloc(sizeof(*iter), GFP_KERNEL); + if (!iter) + return -ENOMEM; + + codetag_lock_module_list(cttype, true); + iter->ct_iter = codetag_get_ct_iter(cttype); + codetag_lock_module_list(cttype, false); + seq_buf_init(&iter->buf, iter->rawbuf, sizeof(iter->rawbuf)); + file->private_data = iter; + + return 0; +} + +static int allocations_file_release(struct inode *inode, struct file *file) +{ + struct alloc_tag_file_iterator *iter = file->private_data; + + kfree(iter); + return 0; +} + +static void alloc_tag_to_text(struct seq_buf *out, struct codetag *ct) +{ + struct alloc_tag *tag = ct_to_alloc_tag(ct); + char buf[10]; + + string_get_size(lazy_percpu_counter_read(&tag->bytes_allocated), 1, + STRING_UNITS_2, buf, sizeof(buf)); + + seq_buf_printf(out, "%8s ", buf); + codetag_to_text(out, ct); + seq_buf_putc(out, '\n'); +} + +static ssize_t allocations_file_read(struct file *file, char __user *ubuf, + size_t size, loff_t *ppos) +{ + struct alloc_tag_file_iterator *iter = file->private_data; + struct user_buf buf = { .buf = ubuf, .size = size }; + struct codetag *ct; + int err = 0; + + codetag_lock_module_list(iter->ct_iter.cttype, true); + while (1) { + err = flush_ubuf(&buf, &iter->buf); + if (err || !buf.size) + break; + + ct = codetag_next_ct(&iter->ct_iter); + if (!ct) + break; + + alloc_tag_to_text(&iter->buf, ct); + } + codetag_lock_module_list(iter->ct_iter.cttype, false); + + return err ? 
: buf.ret; +} + +static const struct file_operations allocations_file_ops = { + .owner = THIS_MODULE, + .open = allocations_file_open, + .release = allocations_file_release, + .read = allocations_file_read, +}; + +static int __init dbgfs_init(struct codetag_type *cttype) +{ + struct dentry *file; + + file = debugfs_create_file("allocations", 0444, NULL, cttype, + &allocations_file_ops); + + return IS_ERR(file) ? PTR_ERR(file) : 0; +} + +static bool alloc_tag_module_unload(struct codetag_type *cttype, struct codetag_module *cmod) +{ + struct codetag_iterator iter = codetag_get_ct_iter(cttype); + bool module_unused = true; + struct alloc_tag *tag; + struct codetag *ct; + size_t bytes; + + for (ct = codetag_next_ct(&iter); ct; ct = codetag_next_ct(&iter)) { + if (iter.cmod != cmod) + continue; + + tag = ct_to_alloc_tag(ct); + bytes = lazy_percpu_counter_read(&tag->bytes_allocated); + + if (!WARN(bytes, "%s:%u module %s func:%s has %zu allocated at module unload", + ct->filename, ct->lineno, ct->modname, ct->function, bytes)) + lazy_percpu_counter_exit(&tag->bytes_allocated); + else + module_unused = false; + } + + return module_unused; +} + +static int __init alloc_tag_init(void) +{ + struct codetag_type *cttype; + const struct codetag_type_desc desc = { + .section = "alloc_tags", + .tag_size = sizeof(struct alloc_tag), + .module_unload = alloc_tag_module_unload, + }; + + cttype = codetag_register_type(&desc); + if (IS_ERR_OR_NULL(cttype)) + return PTR_ERR(cttype); + + return dbgfs_init(cttype); +} +module_init(alloc_tag_init); diff --git a/scripts/module.lds.S b/scripts/module.lds.S index bf5bcf2836d8..45c67a0994f3 100644 --- a/scripts/module.lds.S +++ b/scripts/module.lds.S @@ -9,6 +9,8 @@ #define DISCARD_EH_FRAME *(.eh_frame) #endif +#include + SECTIONS { /DISCARD/ : { *(.discard) @@ -47,12 +49,17 @@ SECTIONS { .data : { *(.data .data.[0-9a-zA-Z_]*) *(.data..L*) + CODETAG_SECTIONS() } .rodata : { *(.rodata .rodata.[0-9a-zA-Z_]*) *(.rodata..L*) } +#else + .data : { + CODETAG_SECTIONS() + } #endif } From patchwork Mon May 1 16:54:28 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suren Baghdasaryan X-Patchwork-Id: 89115 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp71795vqo; Mon, 1 May 2023 10:19:53 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ4CMM0tvMzLnB9hHSzAjqtL1xn4I8tuWKVJx4ZupYeemU51WvVuYZz3eLaWniXXgcDktA3X X-Received: by 2002:a05:6a21:2d8b:b0:f0:dfe2:21af with SMTP id ty11-20020a056a212d8b00b000f0dfe221afmr14213947pzb.29.1682961593495; Mon, 01 May 2023 10:19:53 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682961593; cv=none; d=google.com; s=arc-20160816; b=lFPuUmeXWRZ6ULfqaiPCMILtdpsTE/wpQ6+eoIJSgqrB3cMulsaXL076frlCS5jzob /JnSbeeGLUvZ8S95FH0Era71yF+t7QPhRsRyN5GX4TPACPhznDQ+ME0ck4bzBVP1RUWC qV4iKy22uaP2A8yiN40BybOLSuQLjeA9N1u0hGIkeP2UxwFxX58akJblGA2NoVG0rY1c gaJyOJR/QSM3dR89dZM6e6m9mkVz5P8Atzjhh2e7jiQaNJt7HEyYJYNDyllONFeW3X/Z ekeIGXPT1lYv6hJv/yZTTVpZEBAfwSlhMGrXm8uUX9VzI3EQMqfGjePQg1j3NXmWfdjP KNCg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:cc:to:from:subject:message-id:references :mime-version:in-reply-to:date:dkim-signature; bh=ANclwoiguD8BsbfKgYnGL1LnBw+SqNV6XnzUrNDvkRU=; b=PiAZRpKuOAZBhMTT7Y4YhPeIA1mD8ma/jtfkuBBdqJN0tq4ML7nyWHbQ17wTKwksK0 xeb+tOUdZJMxfdj6KfboWLTfpgYLNljgZk0FzN95qFacdxDjTJos444Wo8j4FjbK6xTI yp5trCTlfyoM7Ysa0Pc2poffmKcjlcfmXIZNh/4fIKB4B4XvkGOCpIBfUYofkOMA/CX0 
Date: Mon, 1 May 2023 09:54:28 -0700
In-Reply-To: <20230501165450.15352-1-surenb@google.com>
Message-ID: <20230501165450.15352-19-surenb@google.com>
Subject: [PATCH 18/40] lib: introduce support for page allocation tagging
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

Introduce helper functions to easily instrument page allocators by storing, in a page_ext field, a pointer to the allocation tag associated with the code that allocated the page.
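As a sketch only (the helper names below are hypothetical, and the real hook points are wired up in a later patch of this series): the helpers added here are intended to be called from the page allocator's allocation and free paths, pairing get_page_tag_ref()/alloc_tag_add() on allocation with pgalloc_tag_dec() on free.

#include <linux/pgalloc_tag.h>
#include <linux/sched.h>

/* Allocation side: charge the new pages to the callsite whose tag was saved
 * in current->alloc_tag by DEFINE_ALLOC_TAG() (see the previous patch). */
static inline void example_tag_new_page(struct page *page, unsigned int order)
{
        union codetag_ref *ref = get_page_tag_ref(page);

        if (ref)
                alloc_tag_add(ref, current->alloc_tag, PAGE_SIZE << order);
}

/* Free side: drop whatever was charged against the page's allocation site. */
static inline void example_untag_page(struct page *page, unsigned int order)
{
        pgalloc_tag_dec(page, order);
}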
Signed-off-by: Suren Baghdasaryan Co-developed-by: Kent Overstreet Signed-off-by: Kent Overstreet --- include/linux/pgalloc_tag.h | 33 +++++++++++++++++++++++++++++++++ lib/Kconfig.debug | 1 + lib/alloc_tag.c | 17 +++++++++++++++++ mm/page_ext.c | 12 +++++++++--- 4 files changed, 60 insertions(+), 3 deletions(-) create mode 100644 include/linux/pgalloc_tag.h diff --git a/include/linux/pgalloc_tag.h b/include/linux/pgalloc_tag.h new file mode 100644 index 000000000000..f8c7b6ef9c75 --- /dev/null +++ b/include/linux/pgalloc_tag.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * page allocation tagging + */ +#ifndef _LINUX_PGALLOC_TAG_H +#define _LINUX_PGALLOC_TAG_H + +#include +#include + +extern struct page_ext_operations page_alloc_tagging_ops; +struct page_ext *lookup_page_ext(const struct page *page); + +static inline union codetag_ref *get_page_tag_ref(struct page *page) +{ + if (page && mem_alloc_profiling_enabled()) { + struct page_ext *page_ext = lookup_page_ext(page); + + if (page_ext) + return (void *)page_ext + page_alloc_tagging_ops.offset; + } + return NULL; +} + +static inline void pgalloc_tag_dec(struct page *page, unsigned int order) +{ + union codetag_ref *ref = get_page_tag_ref(page); + + if (ref) + alloc_tag_sub(ref, PAGE_SIZE << order); +} + +#endif /* _LINUX_PGALLOC_TAG_H */ diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index da0a91ea6042..d3aa5ee0bf0d 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -967,6 +967,7 @@ config MEM_ALLOC_PROFILING depends on DEBUG_FS select CODE_TAGGING select LAZY_PERCPU_COUNTER + select PAGE_EXTENSION help Track allocation source code and record total allocation size initiated at that code location. The mechanism can be used to track diff --git a/lib/alloc_tag.c b/lib/alloc_tag.c index 3c4cfeb79862..4a0b95a46b2e 100644 --- a/lib/alloc_tag.c +++ b/lib/alloc_tag.c @@ -4,6 +4,7 @@ #include #include #include +#include #include #include @@ -159,6 +160,22 @@ static bool alloc_tag_module_unload(struct codetag_type *cttype, struct codetag_ return module_unused; } +static __init bool need_page_alloc_tagging(void) +{ + return true; +} + +static __init void init_page_alloc_tagging(void) +{ +} + +struct page_ext_operations page_alloc_tagging_ops = { + .size = sizeof(union codetag_ref), + .need = need_page_alloc_tagging, + .init = init_page_alloc_tagging, +}; +EXPORT_SYMBOL(page_alloc_tagging_ops); + static int __init alloc_tag_init(void) { struct codetag_type *cttype; diff --git a/mm/page_ext.c b/mm/page_ext.c index dc1626be458b..eaf054ec276c 100644 --- a/mm/page_ext.c +++ b/mm/page_ext.c @@ -10,6 +10,7 @@ #include #include #include +#include /* * struct page extension @@ -82,6 +83,9 @@ static struct page_ext_operations *page_ext_ops[] __initdata = { #if defined(CONFIG_PAGE_IDLE_FLAG) && !defined(CONFIG_64BIT) &page_idle_ops, #endif +#ifdef CONFIG_MEM_ALLOC_PROFILING + &page_alloc_tagging_ops, +#endif #ifdef CONFIG_PAGE_TABLE_CHECK &page_table_check_ops, #endif @@ -90,7 +94,7 @@ static struct page_ext_operations *page_ext_ops[] __initdata = { unsigned long page_ext_size; static unsigned long total_usage; -static struct page_ext *lookup_page_ext(const struct page *page); +struct page_ext *lookup_page_ext(const struct page *page); bool early_page_ext __meminitdata; static int __init setup_early_page_ext(char *str) @@ -199,7 +203,7 @@ void __meminit pgdat_page_ext_init(struct pglist_data *pgdat) pgdat->node_page_ext = NULL; } -static struct page_ext *lookup_page_ext(const struct page *page) +struct page_ext 
*lookup_page_ext(const struct page *page) { unsigned long pfn = page_to_pfn(page); unsigned long index; @@ -219,6 +223,7 @@ static struct page_ext *lookup_page_ext(const struct page *page) MAX_ORDER_NR_PAGES); return get_entry(base, index); } +EXPORT_SYMBOL(lookup_page_ext); static int __init alloc_node_page_ext(int nid) { @@ -278,7 +283,7 @@ static bool page_ext_invalid(struct page_ext *page_ext) return !page_ext || (((unsigned long)page_ext & PAGE_EXT_INVALID) == PAGE_EXT_INVALID); } -static struct page_ext *lookup_page_ext(const struct page *page) +struct page_ext *lookup_page_ext(const struct page *page) { unsigned long pfn = page_to_pfn(page); struct mem_section *section = __pfn_to_section(pfn); @@ -295,6 +300,7 @@ static struct page_ext *lookup_page_ext(const struct page *page) return NULL; return get_entry(page_ext, pfn); } +EXPORT_SYMBOL(lookup_page_ext); static void *__meminit alloc_page_ext(size_t size, int nid) { From patchwork Mon May 1 16:54:29 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suren Baghdasaryan X-Patchwork-Id: 89125 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp73032vqo; Mon, 1 May 2023 10:22:02 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ6FuIraK7Bx+A3K/qlHFX8mmkluroBizkyvVF8/EQdxSqW4gnAyASKKHCt9PvU9iYdn2k+6 X-Received: by 2002:a17:902:b48e:b0:1a9:a3b3:f935 with SMTP id y14-20020a170902b48e00b001a9a3b3f935mr14472425plr.57.1682961721754; Mon, 01 May 2023 10:22:01 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682961721; cv=none; d=google.com; s=arc-20160816; b=OSzD7/CSuywgrMcDSeIVt8P1Uzvhdv4UoYu4/yqv6jVBe+3xihgaHYNtaZvEekQQzU wYqTweOStLLk2f/8eYF4koz3SZk0jSHMJvswyY+E+GMv75D4xAjOAW0qg60vq/xK3wvD +EbYKPqUQ+ha1H2qogrvoGq1HAmwgUJi9ztfYgq1oNuJiLBRPeGqDWH3Xep0AafJhH9f GkYnOoXV/s2kUMgt/ZxrTklbAk+nYbrRNyw1oLB8kOC/t8HTM31W0KcrfI2hOyq12k3V gxgQ+MW9yYVKadQqGxsjYqcmvcf0e7SPTZJRmGHlSaMEJ0dterTmowyrR7W4ps/uQLcC BW+w== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:cc:to:from:subject:message-id:references :mime-version:in-reply-to:date:dkim-signature; bh=LeC1gX4P3RQjW1NYELsxA3PFCP+EdMyYyajg3MbW7ME=; b=Jq6rRDY8M7jkqosg6WOKw0CCbhmnG5cJ7Higm1aTuavcv5ZtsYcAY+bdFYV2lMjR9Y cHnUSWCHsaXPKKGpNwvMvqTEs4t1sFJEXogCCgj+5m4wsgFs87dodAQxmFhq2pfXdhfw Aspk4gy09YUgFPmrUgtXORNkyV//F8b+0G5Wj2uINy5miKJa56flvAL4F9PdjA+B9+a3 DHAGtbU19F7yXoqiutFgJezPY09wh13wIzSFwVkZB/c7BJwheh9YB45ozR5IBjgbXlAL ybKuI0K2EFCbySgzvdOjZ6GeTaTVW5M8n+F9FA01aF+6wa5jThWv0La9jHodyV1wrBky /bVQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@google.com header.s=20221208 header.b=hhMiGEI+; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: from out1.vger.email (out1.vger.email. 
Date: Mon, 1 May 2023 09:54:29 -0700
In-Reply-To: <20230501165450.15352-1-surenb@google.com>
Message-ID: <20230501165450.15352-20-surenb@google.com>
Subject: [PATCH 19/40] change alloc_pages name in dma_map_ops to avoid name conflicts
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
After redefining alloc_pages, all uses of that name are being replaced. Change the conflicting names to prevent the preprocessor from replacing them when that is not intended.
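For readers unfamiliar with the conflict, a simplified stand-alone sketch (hypothetical names; the real wrapper macro arrives in the next patch) of why the dma_map_ops member has to stop being called alloc_pages:

/* Simplified stand-in for the allocation-profiling wrapper macro. */
#define alloc_pages(gfp, order) instrumented_alloc_pages(gfp, order)

struct example_ops {
        /*
         * With the old member name, a call such as
         * "ops->alloc_pages(dev, size, handle, dir, gfp)" is seen by the
         * preprocessor as an invocation of the macro above: it is either
         * rewritten to instrumented_alloc_pages(...) or rejected for
         * passing the wrong number of arguments.  A member named
         * alloc_pages_op is never expanded, so such callers keep working
         * unchanged.
         */
        void *(*alloc_pages_op)(void *dev, unsigned long size);
};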
Signed-off-by: Suren Baghdasaryan --- arch/x86/kernel/amd_gart_64.c | 2 +- drivers/iommu/dma-iommu.c | 2 +- drivers/xen/grant-dma-ops.c | 2 +- drivers/xen/swiotlb-xen.c | 2 +- include/linux/dma-map-ops.h | 2 +- kernel/dma/mapping.c | 4 ++-- 6 files changed, 7 insertions(+), 7 deletions(-) diff --git a/arch/x86/kernel/amd_gart_64.c b/arch/x86/kernel/amd_gart_64.c index 56a917df410d..842a0ec5eaa9 100644 --- a/arch/x86/kernel/amd_gart_64.c +++ b/arch/x86/kernel/amd_gart_64.c @@ -676,7 +676,7 @@ static const struct dma_map_ops gart_dma_ops = { .get_sgtable = dma_common_get_sgtable, .dma_supported = dma_direct_supported, .get_required_mask = dma_direct_get_required_mask, - .alloc_pages = dma_direct_alloc_pages, + .alloc_pages_op = dma_direct_alloc_pages, .free_pages = dma_direct_free_pages, }; diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index 7a9f0b0bddbd..76a9d5ca4eee 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -1556,7 +1556,7 @@ static const struct dma_map_ops iommu_dma_ops = { .flags = DMA_F_PCI_P2PDMA_SUPPORTED, .alloc = iommu_dma_alloc, .free = iommu_dma_free, - .alloc_pages = dma_common_alloc_pages, + .alloc_pages_op = dma_common_alloc_pages, .free_pages = dma_common_free_pages, .alloc_noncontiguous = iommu_dma_alloc_noncontiguous, .free_noncontiguous = iommu_dma_free_noncontiguous, diff --git a/drivers/xen/grant-dma-ops.c b/drivers/xen/grant-dma-ops.c index 9784a77fa3c9..6c7d984f164d 100644 --- a/drivers/xen/grant-dma-ops.c +++ b/drivers/xen/grant-dma-ops.c @@ -282,7 +282,7 @@ static int xen_grant_dma_supported(struct device *dev, u64 mask) static const struct dma_map_ops xen_grant_dma_ops = { .alloc = xen_grant_dma_alloc, .free = xen_grant_dma_free, - .alloc_pages = xen_grant_dma_alloc_pages, + .alloc_pages_op = xen_grant_dma_alloc_pages, .free_pages = xen_grant_dma_free_pages, .mmap = dma_common_mmap, .get_sgtable = dma_common_get_sgtable, diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c index 67aa74d20162..5ab2616153f0 100644 --- a/drivers/xen/swiotlb-xen.c +++ b/drivers/xen/swiotlb-xen.c @@ -403,6 +403,6 @@ const struct dma_map_ops xen_swiotlb_dma_ops = { .dma_supported = xen_swiotlb_dma_supported, .mmap = dma_common_mmap, .get_sgtable = dma_common_get_sgtable, - .alloc_pages = dma_common_alloc_pages, + .alloc_pages_op = dma_common_alloc_pages, .free_pages = dma_common_free_pages, }; diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h index 31f114f486c4..d741940dcb3b 100644 --- a/include/linux/dma-map-ops.h +++ b/include/linux/dma-map-ops.h @@ -27,7 +27,7 @@ struct dma_map_ops { unsigned long attrs); void (*free)(struct device *dev, size_t size, void *vaddr, dma_addr_t dma_handle, unsigned long attrs); - struct page *(*alloc_pages)(struct device *dev, size_t size, + struct page *(*alloc_pages_op)(struct device *dev, size_t size, dma_addr_t *dma_handle, enum dma_data_direction dir, gfp_t gfp); void (*free_pages)(struct device *dev, size_t size, struct page *vaddr, diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c index 9a4db5cce600..fc42930af14b 100644 --- a/kernel/dma/mapping.c +++ b/kernel/dma/mapping.c @@ -570,9 +570,9 @@ static struct page *__dma_alloc_pages(struct device *dev, size_t size, size = PAGE_ALIGN(size); if (dma_alloc_direct(dev, ops)) return dma_direct_alloc_pages(dev, size, dma_handle, dir, gfp); - if (!ops->alloc_pages) + if (!ops->alloc_pages_op) return NULL; - return ops->alloc_pages(dev, size, dma_handle, dir, gfp); + return ops->alloc_pages_op(dev, size, dma_handle, 
dir, gfp); } struct page *dma_alloc_pages(struct device *dev, size_t size, From patchwork Mon May 1 16:54:30 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suren Baghdasaryan X-Patchwork-Id: 89127 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp73385vqo; Mon, 1 May 2023 10:22:40 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ5cjRN0BTDaDx8gNh+XsENYNuSZDzWETYvgHv3cqofwDe00AknigNSgMYUlmAmOOngCHJHj X-Received: by 2002:a05:6a00:248b:b0:63f:ffd:5360 with SMTP id c11-20020a056a00248b00b0063f0ffd5360mr22264756pfv.21.1682961760163; Mon, 01 May 2023 10:22:40 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682961760; cv=none; d=google.com; s=arc-20160816; b=PRtQCbUhR5cTZBMmxymQiCcxLNUDrAuZO/erswbK70ptuiMtQbBvy/0O6hYonvenw1 MwPnEuEsbumZDXZEhNM4o0FQThloh8TfjXZ30GTluWWbq+WBHS/3MXSNccWeAtV5akcR NrTQoUnTz6LQqlXqFBDo62mvG90yHcntLoum4PT1tUcNkjApV3XF3GvEX7O5OZ9TLnc9 H7DkdFevuZdBVLayg2KrpFNinRNhisuVtECYuXGalNasaYG/nMNRKtGo//G+ytn3CfRg knw4nIOdje/7G1DuRgQRV7qKfwccq/a67nxlk4GPi64PTTBu8r63jH15Sihsxw4BiHeT CL/g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:cc:to:from:subject:message-id:references :mime-version:in-reply-to:date:dkim-signature; bh=Mri2z+LvwzIobpEl78ZbT4L3ds7H/B8V3mrszruJwqU=; b=mSDuWIqsaY+Rl4F+jZ3dOMpCm/SUOwVxAgmXFr1M1N/BATzl5mJ2VLvEfOY/o9gqWQ ry9upaRScXOk5+qpqt9u5ijr3sH3ncHaLc3FS3gdMrOHEE1YWJJlMFfoord9Bh9drfRA nzfe+QTIRrDsrch216lF4//jjqm9F3c0YDzAqb3Bx21LTmPKxEinfBOpKDb8ZSTkCSr8 j62fKlyY8xIylv8m/5g1vGKS36j1jp8arb1cqYhTthRZnxwpcR3HCD3V1jtqDnFDSeip 6QP7kToFZn0J7a9sG4h8NUNHqWZ8IJn6KLvqPtdeLEKTnMsWecPw01y2MPKJbhcwcPLy kVbQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@google.com header.s=20221208 header.b=t3klBo0f; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: from out1.vger.email (out1.vger.email. 
Date: Mon, 1 May 2023 09:54:30 -0700
In-Reply-To: <20230501165450.15352-1-surenb@google.com>
Message-ID: <20230501165450.15352-21-surenb@google.com>
Subject: [PATCH 20/40] mm: enable page allocation tagging
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Redefine page allocators to record allocation tags upon their invocation. Instrument post_alloc_hook and free_pages_prepare to modify the current allocation tag.
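To make the mechanism concrete, a hand-expanded sketch (illustrative only; the real macros below use a statement expression, not a helper function) of what a call through the wrapped __alloc_pages() reduces to, for a callsite passing a NULL nodemask, once alloc_hooks() and DEFINE_ALLOC_TAG() are applied:

/* Hypothetical helper equivalent to one expanded __alloc_pages() callsite. */
static struct page *example_alloc_pages(gfp_t gfp, unsigned int order,
                                         int preferred_nid)
{
        /* DEFINE_ALLOC_TAG(): one static tag per callsite, emitted into the
         * "alloc_tags" section, then made current via alloc_tag_save(). */
        static struct alloc_tag _alloc_tag __used __aligned(8)
                __section("alloc_tags") = { .ct = CODE_TAG_INIT };
        struct alloc_tag *_old = alloc_tag_save(&_alloc_tag);
        struct page *_res;

        /* The real allocator, renamed to _alloc_pages2() by this patch. */
        _res = _alloc_pages2(gfp, order, preferred_nid, NULL);

        /* Put back whatever tag the caller had active. */
        alloc_tag_restore(&_alloc_tag, _old);
        return _res;
}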
Signed-off-by: Suren Baghdasaryan --- include/linux/alloc_tag.h | 11 ++++ include/linux/gfp.h | 123 +++++++++++++++++++++++++----------- include/linux/page_ext.h | 1 - include/linux/pagemap.h | 9 ++- include/linux/pgalloc_tag.h | 38 +++++++++-- mm/compaction.c | 9 ++- mm/filemap.c | 6 +- mm/mempolicy.c | 30 ++++----- mm/mm_init.c | 1 + mm/page_alloc.c | 73 ++++++++++++--------- 10 files changed, 208 insertions(+), 93 deletions(-) diff --git a/include/linux/alloc_tag.h b/include/linux/alloc_tag.h index d913f8d9a7d8..07922d81b641 100644 --- a/include/linux/alloc_tag.h +++ b/include/linux/alloc_tag.h @@ -102,4 +102,15 @@ static inline void alloc_tag_add(union codetag_ref *ref, struct alloc_tag *tag, #endif +#define alloc_hooks(_do_alloc, _res_type, _err) \ +({ \ + _res_type _res; \ + DEFINE_ALLOC_TAG(_alloc_tag, _old); \ + \ + _res = _do_alloc; \ + alloc_tag_restore(&_alloc_tag, _old); \ + _res; \ +}) + + #endif /* _LINUX_ALLOC_TAG_H */ diff --git a/include/linux/gfp.h b/include/linux/gfp.h index ed8cb537c6a7..0cb4a515109a 100644 --- a/include/linux/gfp.h +++ b/include/linux/gfp.h @@ -6,6 +6,8 @@ #include #include +#include +#include struct vm_area_struct; @@ -174,42 +176,57 @@ static inline void arch_free_page(struct page *page, int order) { } static inline void arch_alloc_page(struct page *page, int order) { } #endif -struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid, +struct page *_alloc_pages2(gfp_t gfp, unsigned int order, int preferred_nid, nodemask_t *nodemask); -struct folio *__folio_alloc(gfp_t gfp, unsigned int order, int preferred_nid, +#define __alloc_pages(_gfp, _order, _preferred_nid, _nodemask) \ + alloc_hooks(_alloc_pages2(_gfp, _order, _preferred_nid, \ + _nodemask), struct page *, NULL) + +struct folio *_folio_alloc2(gfp_t gfp, unsigned int order, int preferred_nid, nodemask_t *nodemask); +#define __folio_alloc(_gfp, _order, _preferred_nid, _nodemask) \ + alloc_hooks(_folio_alloc2(_gfp, _order, _preferred_nid, \ + _nodemask), struct folio *, NULL) -unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid, +unsigned long _alloc_pages_bulk(gfp_t gfp, int preferred_nid, nodemask_t *nodemask, int nr_pages, struct list_head *page_list, struct page **page_array); - -unsigned long alloc_pages_bulk_array_mempolicy(gfp_t gfp, +#define __alloc_pages_bulk(_gfp, _preferred_nid, _nodemask, _nr_pages, \ + _page_list, _page_array) \ + alloc_hooks(_alloc_pages_bulk(_gfp, _preferred_nid, \ + _nodemask, _nr_pages, \ + _page_list, _page_array), \ + unsigned long, 0) + +unsigned long _alloc_pages_bulk_array_mempolicy(gfp_t gfp, unsigned long nr_pages, struct page **page_array); +#define alloc_pages_bulk_array_mempolicy(_gfp, _nr_pages, _page_array) \ + alloc_hooks(_alloc_pages_bulk_array_mempolicy(_gfp, \ + _nr_pages, _page_array), \ + unsigned long, 0) /* Bulk allocate order-0 pages */ -static inline unsigned long -alloc_pages_bulk_list(gfp_t gfp, unsigned long nr_pages, struct list_head *list) -{ - return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, list, NULL); -} +#define alloc_pages_bulk_list(_gfp, _nr_pages, _list) \ + __alloc_pages_bulk(_gfp, numa_mem_id(), NULL, _nr_pages, _list, NULL) -static inline unsigned long -alloc_pages_bulk_array(gfp_t gfp, unsigned long nr_pages, struct page **page_array) -{ - return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, NULL, page_array); -} +#define alloc_pages_bulk_array(_gfp, _nr_pages, _page_array) \ + __alloc_pages_bulk(_gfp, numa_mem_id(), NULL, _nr_pages, NULL, _page_array) static inline unsigned 
long -alloc_pages_bulk_array_node(gfp_t gfp, int nid, unsigned long nr_pages, struct page **page_array) +_alloc_pages_bulk_array_node(gfp_t gfp, int nid, unsigned long nr_pages, struct page **page_array) { if (nid == NUMA_NO_NODE) nid = numa_mem_id(); - return __alloc_pages_bulk(gfp, nid, NULL, nr_pages, NULL, page_array); + return _alloc_pages_bulk(gfp, nid, NULL, nr_pages, NULL, page_array); } +#define alloc_pages_bulk_array_node(_gfp, _nid, _nr_pages, _page_array) \ + alloc_hooks(_alloc_pages_bulk_array_node(_gfp, _nid, _nr_pages, _page_array), \ + unsigned long, 0) + static inline void warn_if_node_offline(int this_node, gfp_t gfp_mask) { gfp_t warn_gfp = gfp_mask & (__GFP_THISNODE|__GFP_NOWARN); @@ -229,21 +246,25 @@ static inline void warn_if_node_offline(int this_node, gfp_t gfp_mask) * online. For more general interface, see alloc_pages_node(). */ static inline struct page * -__alloc_pages_node(int nid, gfp_t gfp_mask, unsigned int order) +_alloc_pages_node2(int nid, gfp_t gfp_mask, unsigned int order) { VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES); warn_if_node_offline(nid, gfp_mask); - return __alloc_pages(gfp_mask, order, nid, NULL); + return _alloc_pages2(gfp_mask, order, nid, NULL); } +#define __alloc_pages_node(_nid, _gfp_mask, _order) \ + alloc_hooks(_alloc_pages_node2(_nid, _gfp_mask, _order), \ + struct page *, NULL) + static inline struct folio *__folio_alloc_node(gfp_t gfp, unsigned int order, int nid) { VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES); warn_if_node_offline(nid, gfp); - return __folio_alloc(gfp, order, nid, NULL); + return _folio_alloc2(gfp, order, nid, NULL); } /* @@ -251,32 +272,45 @@ struct folio *__folio_alloc_node(gfp_t gfp, unsigned int order, int nid) * prefer the current CPU's closest node. Otherwise node must be valid and * online. 
*/ -static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask, +static inline struct page *_alloc_pages_node(int nid, gfp_t gfp_mask, unsigned int order) { if (nid == NUMA_NO_NODE) nid = numa_mem_id(); - return __alloc_pages_node(nid, gfp_mask, order); + return _alloc_pages_node2(nid, gfp_mask, order); } +#define alloc_pages_node(_nid, _gfp_mask, _order) \ + alloc_hooks(_alloc_pages_node(_nid, _gfp_mask, _order), \ + struct page *, NULL) + #ifdef CONFIG_NUMA -struct page *alloc_pages(gfp_t gfp, unsigned int order); -struct folio *folio_alloc(gfp_t gfp, unsigned order); -struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma, +struct page *_alloc_pages(gfp_t gfp, unsigned int order); +struct folio *_folio_alloc(gfp_t gfp, unsigned int order); +struct folio *_vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma, unsigned long addr, bool hugepage); #else -static inline struct page *alloc_pages(gfp_t gfp_mask, unsigned int order) +static inline struct page *_alloc_pages(gfp_t gfp_mask, unsigned int order) { - return alloc_pages_node(numa_node_id(), gfp_mask, order); + return _alloc_pages_node(numa_node_id(), gfp_mask, order); } -static inline struct folio *folio_alloc(gfp_t gfp, unsigned int order) +static inline struct folio *_folio_alloc(gfp_t gfp, unsigned int order) { return __folio_alloc_node(gfp, order, numa_node_id()); } -#define vma_alloc_folio(gfp, order, vma, addr, hugepage) \ - folio_alloc(gfp, order) +#define _vma_alloc_folio(gfp, order, vma, addr, hugepage) \ + _folio_alloc(gfp, order) #endif + +#define alloc_pages(_gfp, _order) \ + alloc_hooks(_alloc_pages(_gfp, _order), struct page *, NULL) +#define folio_alloc(_gfp, _order) \ + alloc_hooks(_folio_alloc(_gfp, _order), struct folio *, NULL) +#define vma_alloc_folio(_gfp, _order, _vma, _addr, _hugepage) \ + alloc_hooks(_vma_alloc_folio(_gfp, _order, _vma, _addr, \ + _hugepage), struct folio *, NULL) + #define alloc_page(gfp_mask) alloc_pages(gfp_mask, 0) static inline struct page *alloc_page_vma(gfp_t gfp, struct vm_area_struct *vma, unsigned long addr) @@ -286,12 +320,21 @@ static inline struct page *alloc_page_vma(gfp_t gfp, return &folio->page; } -extern unsigned long __get_free_pages(gfp_t gfp_mask, unsigned int order); -extern unsigned long get_zeroed_page(gfp_t gfp_mask); +extern unsigned long _get_free_pages(gfp_t gfp_mask, unsigned int order); +#define __get_free_pages(_gfp_mask, _order) \ + alloc_hooks(_get_free_pages(_gfp_mask, _order), unsigned long, 0) +extern unsigned long _get_zeroed_page(gfp_t gfp_mask); +#define get_zeroed_page(_gfp_mask) \ + alloc_hooks(_get_zeroed_page(_gfp_mask), unsigned long, 0) -void *alloc_pages_exact(size_t size, gfp_t gfp_mask) __alloc_size(1); +void *_alloc_pages_exact(size_t size, gfp_t gfp_mask) __alloc_size(1); +#define alloc_pages_exact(_size, _gfp_mask) \ + alloc_hooks(_alloc_pages_exact(_size, _gfp_mask), void *, NULL) void free_pages_exact(void *virt, size_t size); -__meminit void *alloc_pages_exact_nid(int nid, size_t size, gfp_t gfp_mask) __alloc_size(2); + +__meminit void *_alloc_pages_exact_nid(int nid, size_t size, gfp_t gfp_mask) __alloc_size(2); +#define alloc_pages_exact_nid(_nid, _size, _gfp_mask) \ + alloc_hooks(_alloc_pages_exact_nid(_nid, _size, _gfp_mask), void *, NULL) #define __get_free_page(gfp_mask) \ __get_free_pages((gfp_mask), 0) @@ -354,10 +397,16 @@ static inline bool pm_suspended_storage(void) #ifdef CONFIG_CONTIG_ALLOC /* The below functions must be run on a range from a single zone. 
*/ -extern int alloc_contig_range(unsigned long start, unsigned long end, +extern int _alloc_contig_range(unsigned long start, unsigned long end, unsigned migratetype, gfp_t gfp_mask); -extern struct page *alloc_contig_pages(unsigned long nr_pages, gfp_t gfp_mask, - int nid, nodemask_t *nodemask); +#define alloc_contig_range(_start, _end, _migratetype, _gfp_mask) \ + alloc_hooks(_alloc_contig_range(_start, _end, _migratetype, \ + _gfp_mask), int, -ENOMEM) +extern struct page *_alloc_contig_pages(unsigned long nr_pages, gfp_t gfp_mask, + int nid, nodemask_t *nodemask); +#define alloc_contig_pages(_nr_pages, _gfp_mask, _nid, _nodemask) \ + alloc_hooks(_alloc_contig_pages(_nr_pages, _gfp_mask, _nid, \ + _nodemask), struct page *, NULL) #endif void free_contig_range(unsigned long pfn, unsigned long nr_pages); diff --git a/include/linux/page_ext.h b/include/linux/page_ext.h index 67314f648aeb..cff15ee5440e 100644 --- a/include/linux/page_ext.h +++ b/include/linux/page_ext.h @@ -4,7 +4,6 @@ #include #include -#include struct pglist_data; diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index a56308a9d1a4..b2efafa001f8 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -467,14 +467,17 @@ static inline void *detach_page_private(struct page *page) } #ifdef CONFIG_NUMA -struct folio *filemap_alloc_folio(gfp_t gfp, unsigned int order); +struct folio *_filemap_alloc_folio(gfp_t gfp, unsigned int order); #else -static inline struct folio *filemap_alloc_folio(gfp_t gfp, unsigned int order) +static inline struct folio *_filemap_alloc_folio(gfp_t gfp, unsigned int order) { - return folio_alloc(gfp, order); + return _folio_alloc(gfp, order); } #endif +#define filemap_alloc_folio(_gfp, _order) \ + alloc_hooks(_filemap_alloc_folio(_gfp, _order), struct folio *, NULL) + static inline struct page *__page_cache_alloc(gfp_t gfp) { return &filemap_alloc_folio(gfp, 0)->page; diff --git a/include/linux/pgalloc_tag.h b/include/linux/pgalloc_tag.h index f8c7b6ef9c75..567327c1c46f 100644 --- a/include/linux/pgalloc_tag.h +++ b/include/linux/pgalloc_tag.h @@ -6,28 +6,58 @@ #define _LINUX_PGALLOC_TAG_H #include + +#ifdef CONFIG_MEM_ALLOC_PROFILING + #include extern struct page_ext_operations page_alloc_tagging_ops; -struct page_ext *lookup_page_ext(const struct page *page); +extern struct page_ext *page_ext_get(struct page *page); +extern void page_ext_put(struct page_ext *page_ext); + +static inline union codetag_ref *codetag_ref_from_page_ext(struct page_ext *page_ext) +{ + return (void *)page_ext + page_alloc_tagging_ops.offset; +} + +static inline struct page_ext *page_ext_from_codetag_ref(union codetag_ref *ref) +{ + return (void *)ref - page_alloc_tagging_ops.offset; +} static inline union codetag_ref *get_page_tag_ref(struct page *page) { if (page && mem_alloc_profiling_enabled()) { - struct page_ext *page_ext = lookup_page_ext(page); + struct page_ext *page_ext = page_ext_get(page); if (page_ext) - return (void *)page_ext + page_alloc_tagging_ops.offset; + return codetag_ref_from_page_ext(page_ext); } return NULL; } +static inline void put_page_tag_ref(union codetag_ref *ref) +{ + if (ref) + page_ext_put(page_ext_from_codetag_ref(ref)); +} + static inline void pgalloc_tag_dec(struct page *page, unsigned int order) { union codetag_ref *ref = get_page_tag_ref(page); - if (ref) + if (ref) { alloc_tag_sub(ref, PAGE_SIZE << order); + put_page_tag_ref(ref); + } } +#else /* CONFIG_MEM_ALLOC_PROFILING */ + +static inline union codetag_ref *get_page_tag_ref(struct page *page) { return 
NULL; } +static inline void put_page_tag_ref(union codetag_ref *ref) {} +#define pgalloc_tag_dec(__page, __size) do {} while (0) + +#endif /* CONFIG_MEM_ALLOC_PROFILING */ + #endif /* _LINUX_PGALLOC_TAG_H */ diff --git a/mm/compaction.c b/mm/compaction.c index c8bcdea15f5f..32707fb62495 100644 --- a/mm/compaction.c +++ b/mm/compaction.c @@ -1684,7 +1684,7 @@ static void isolate_freepages(struct compact_control *cc) * This is a migrate-callback that "allocates" freepages by taking pages * from the isolated freelists in the block we are migrating to. */ -static struct page *compaction_alloc(struct page *migratepage, +static struct page *_compaction_alloc(struct page *migratepage, unsigned long data) { struct compact_control *cc = (struct compact_control *)data; @@ -1704,6 +1704,13 @@ static struct page *compaction_alloc(struct page *migratepage, return freepage; } +static struct page *compaction_alloc(struct page *migratepage, + unsigned long data) +{ + return alloc_hooks(_compaction_alloc(migratepage, data), + struct page *, NULL); +} + /* * This is a migrate-callback that "frees" freepages back to the isolated * freelist. All pages on the freelist are from the same zone, so there is no diff --git a/mm/filemap.c b/mm/filemap.c index a34abfe8c654..f0f8b782d172 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -958,7 +958,7 @@ int filemap_add_folio(struct address_space *mapping, struct folio *folio, EXPORT_SYMBOL_GPL(filemap_add_folio); #ifdef CONFIG_NUMA -struct folio *filemap_alloc_folio(gfp_t gfp, unsigned int order) +struct folio *_filemap_alloc_folio(gfp_t gfp, unsigned int order) { int n; struct folio *folio; @@ -973,9 +973,9 @@ struct folio *filemap_alloc_folio(gfp_t gfp, unsigned int order) return folio; } - return folio_alloc(gfp, order); + return _folio_alloc(gfp, order); } -EXPORT_SYMBOL(filemap_alloc_folio); +EXPORT_SYMBOL(_filemap_alloc_folio); #endif /* diff --git a/mm/mempolicy.c b/mm/mempolicy.c index 2068b594dc88..80cd33811641 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -2141,7 +2141,7 @@ static struct page *alloc_pages_preferred_many(gfp_t gfp, unsigned int order, } /** - * vma_alloc_folio - Allocate a folio for a VMA. + * _vma_alloc_folio - Allocate a folio for a VMA. * @gfp: GFP flags. * @order: Order of the folio. * @vma: Pointer to VMA or NULL if not available. @@ -2155,7 +2155,7 @@ static struct page *alloc_pages_preferred_many(gfp_t gfp, unsigned int order, * * Return: The folio on success or NULL if allocation fails. */ -struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma, +struct folio *_vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma, unsigned long addr, bool hugepage) { struct mempolicy *pol; @@ -2240,10 +2240,10 @@ struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma, out: return folio; } -EXPORT_SYMBOL(vma_alloc_folio); +EXPORT_SYMBOL(_vma_alloc_folio); /** - * alloc_pages - Allocate pages. + * _alloc_pages - Allocate pages. * @gfp: GFP flags. * @order: Power of two of number of pages to allocate. * @@ -2256,7 +2256,7 @@ EXPORT_SYMBOL(vma_alloc_folio); * flags are used. * Return: The page on success or NULL if allocation fails. 
*/ -struct page *alloc_pages(gfp_t gfp, unsigned order) +struct page *_alloc_pages(gfp_t gfp, unsigned int order) { struct mempolicy *pol = &default_policy; struct page *page; @@ -2274,15 +2274,15 @@ struct page *alloc_pages(gfp_t gfp, unsigned order) page = alloc_pages_preferred_many(gfp, order, policy_node(gfp, pol, numa_node_id()), pol); else - page = __alloc_pages(gfp, order, + page = _alloc_pages2(gfp, order, policy_node(gfp, pol, numa_node_id()), policy_nodemask(gfp, pol)); return page; } -EXPORT_SYMBOL(alloc_pages); +EXPORT_SYMBOL(_alloc_pages); -struct folio *folio_alloc(gfp_t gfp, unsigned order) +struct folio *_folio_alloc(gfp_t gfp, unsigned int order) { struct page *page = alloc_pages(gfp | __GFP_COMP, order); @@ -2290,7 +2290,7 @@ struct folio *folio_alloc(gfp_t gfp, unsigned order) prep_transhuge_page(page); return (struct folio *)page; } -EXPORT_SYMBOL(folio_alloc); +EXPORT_SYMBOL(_folio_alloc); static unsigned long alloc_pages_bulk_array_interleave(gfp_t gfp, struct mempolicy *pol, unsigned long nr_pages, @@ -2309,13 +2309,13 @@ static unsigned long alloc_pages_bulk_array_interleave(gfp_t gfp, for (i = 0; i < nodes; i++) { if (delta) { - nr_allocated = __alloc_pages_bulk(gfp, + nr_allocated = _alloc_pages_bulk(gfp, interleave_nodes(pol), NULL, nr_pages_per_node + 1, NULL, page_array); delta--; } else { - nr_allocated = __alloc_pages_bulk(gfp, + nr_allocated = _alloc_pages_bulk(gfp, interleave_nodes(pol), NULL, nr_pages_per_node, NULL, page_array); } @@ -2337,11 +2337,11 @@ static unsigned long alloc_pages_bulk_array_preferred_many(gfp_t gfp, int nid, preferred_gfp = gfp | __GFP_NOWARN; preferred_gfp &= ~(__GFP_DIRECT_RECLAIM | __GFP_NOFAIL); - nr_allocated = __alloc_pages_bulk(preferred_gfp, nid, &pol->nodes, + nr_allocated = _alloc_pages_bulk(preferred_gfp, nid, &pol->nodes, nr_pages, NULL, page_array); if (nr_allocated < nr_pages) - nr_allocated += __alloc_pages_bulk(gfp, numa_node_id(), NULL, + nr_allocated += _alloc_pages_bulk(gfp, numa_node_id(), NULL, nr_pages - nr_allocated, NULL, page_array + nr_allocated); return nr_allocated; @@ -2353,7 +2353,7 @@ static unsigned long alloc_pages_bulk_array_preferred_many(gfp_t gfp, int nid, * It can accelerate memory allocation especially interleaving * allocate memory. */ -unsigned long alloc_pages_bulk_array_mempolicy(gfp_t gfp, +unsigned long _alloc_pages_bulk_array_mempolicy(gfp_t gfp, unsigned long nr_pages, struct page **page_array) { struct mempolicy *pol = &default_policy; @@ -2369,7 +2369,7 @@ unsigned long alloc_pages_bulk_array_mempolicy(gfp_t gfp, return alloc_pages_bulk_array_preferred_many(gfp, numa_node_id(), pol, nr_pages, page_array); - return __alloc_pages_bulk(gfp, policy_node(gfp, pol, numa_node_id()), + return _alloc_pages_bulk(gfp, policy_node(gfp, pol, numa_node_id()), policy_nodemask(gfp, pol), nr_pages, NULL, page_array); } diff --git a/mm/mm_init.c b/mm/mm_init.c index 7f7f9c677854..42135fad4d8a 100644 --- a/mm/mm_init.c +++ b/mm/mm_init.c @@ -24,6 +24,7 @@ #include #include #include +#include #include #include #include "internal.h" diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 9de2a18519a1..edd35500f7f6 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -74,6 +74,7 @@ #include #include #include +#include #include #include #include @@ -657,6 +658,7 @@ static inline bool pcp_allowed_order(unsigned int order) static inline void free_the_page(struct page *page, unsigned int order) { + if (pcp_allowed_order(order)) /* Via pcp? 
*/ free_unref_page(page, order); else @@ -1259,6 +1261,7 @@ static __always_inline bool free_pages_prepare(struct page *page, __memcg_kmem_uncharge_page(page, order); reset_page_owner(page, order); page_table_check_free(page, order); + pgalloc_tag_dec(page, order); return false; } @@ -1301,6 +1304,7 @@ static __always_inline bool free_pages_prepare(struct page *page, page->flags &= ~PAGE_FLAGS_CHECK_AT_PREP; reset_page_owner(page, order); page_table_check_free(page, order); + pgalloc_tag_dec(page, order); if (!PageHighMem(page)) { debug_check_no_locks_freed(page_address(page), @@ -1669,6 +1673,9 @@ inline void post_alloc_hook(struct page *page, unsigned int order, bool init = !want_init_on_free() && want_init_on_alloc(gfp_flags) && !should_skip_init(gfp_flags); bool zero_tags = init && (gfp_flags & __GFP_ZEROTAGS); +#ifdef CONFIG_MEM_ALLOC_PROFILING + union codetag_ref *ref; +#endif int i; set_page_private(page, 0); @@ -1721,6 +1728,14 @@ inline void post_alloc_hook(struct page *page, unsigned int order, set_page_owner(page, order, gfp_flags); page_table_check_alloc(page, order); + +#ifdef CONFIG_MEM_ALLOC_PROFILING + ref = get_page_tag_ref(page); + if (ref) { + alloc_tag_add(ref, current->alloc_tag, PAGE_SIZE << order); + put_page_tag_ref(ref); + } +#endif } static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags, @@ -4568,7 +4583,7 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order, * * Returns the number of pages on the list or array. */ -unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid, +unsigned long _alloc_pages_bulk(gfp_t gfp, int preferred_nid, nodemask_t *nodemask, int nr_pages, struct list_head *page_list, struct page **page_array) @@ -4704,7 +4719,7 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid, pcp_trylock_finish(UP_flags); failed: - page = __alloc_pages(gfp, 0, preferred_nid, nodemask); + page = _alloc_pages2(gfp, 0, preferred_nid, nodemask); if (page) { if (page_list) list_add(&page->lru, page_list); @@ -4715,12 +4730,12 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid, goto out; } -EXPORT_SYMBOL_GPL(__alloc_pages_bulk); +EXPORT_SYMBOL_GPL(_alloc_pages_bulk); /* * This is the 'heart' of the zoned buddy allocator. */ -struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid, +struct page *_alloc_pages2(gfp_t gfp, unsigned int order, int preferred_nid, nodemask_t *nodemask) { struct page *page; @@ -4783,41 +4798,41 @@ struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid, return page; } -EXPORT_SYMBOL(__alloc_pages); +EXPORT_SYMBOL(_alloc_pages2); -struct folio *__folio_alloc(gfp_t gfp, unsigned int order, int preferred_nid, +struct folio *_folio_alloc2(gfp_t gfp, unsigned int order, int preferred_nid, nodemask_t *nodemask) { - struct page *page = __alloc_pages(gfp | __GFP_COMP, order, + struct page *page = _alloc_pages2(gfp | __GFP_COMP, order, preferred_nid, nodemask); if (page && order > 1) prep_transhuge_page(page); return (struct folio *)page; } -EXPORT_SYMBOL(__folio_alloc); +EXPORT_SYMBOL(_folio_alloc2); /* * Common helper functions. Never use with __GFP_HIGHMEM because the returned * address cannot represent highmem pages. Use alloc_pages and then kmap if * you need to access high mem. 
*/ -unsigned long __get_free_pages(gfp_t gfp_mask, unsigned int order) +unsigned long _get_free_pages(gfp_t gfp_mask, unsigned int order) { struct page *page; - page = alloc_pages(gfp_mask & ~__GFP_HIGHMEM, order); + page = _alloc_pages(gfp_mask & ~__GFP_HIGHMEM, order); if (!page) return 0; return (unsigned long) page_address(page); } -EXPORT_SYMBOL(__get_free_pages); +EXPORT_SYMBOL(_get_free_pages); -unsigned long get_zeroed_page(gfp_t gfp_mask) +unsigned long _get_zeroed_page(gfp_t gfp_mask) { - return __get_free_page(gfp_mask | __GFP_ZERO); + return _get_free_pages(gfp_mask | __GFP_ZERO, 0); } -EXPORT_SYMBOL(get_zeroed_page); +EXPORT_SYMBOL(_get_zeroed_page); /** * __free_pages - Free pages allocated with alloc_pages(). @@ -5009,7 +5024,7 @@ static void *make_alloc_exact(unsigned long addr, unsigned int order, } /** - * alloc_pages_exact - allocate an exact number physically-contiguous pages. + * _alloc_pages_exact - allocate an exact number physically-contiguous pages. * @size: the number of bytes to allocate * @gfp_mask: GFP flags for the allocation, must not contain __GFP_COMP * @@ -5023,7 +5038,7 @@ static void *make_alloc_exact(unsigned long addr, unsigned int order, * * Return: pointer to the allocated area or %NULL in case of error. */ -void *alloc_pages_exact(size_t size, gfp_t gfp_mask) +void *_alloc_pages_exact(size_t size, gfp_t gfp_mask) { unsigned int order = get_order(size); unsigned long addr; @@ -5031,13 +5046,13 @@ void *alloc_pages_exact(size_t size, gfp_t gfp_mask) if (WARN_ON_ONCE(gfp_mask & (__GFP_COMP | __GFP_HIGHMEM))) gfp_mask &= ~(__GFP_COMP | __GFP_HIGHMEM); - addr = __get_free_pages(gfp_mask, order); + addr = _get_free_pages(gfp_mask, order); return make_alloc_exact(addr, order, size); } -EXPORT_SYMBOL(alloc_pages_exact); +EXPORT_SYMBOL(_alloc_pages_exact); /** - * alloc_pages_exact_nid - allocate an exact number of physically-contiguous + * _alloc_pages_exact_nid - allocate an exact number of physically-contiguous * pages on a node. * @nid: the preferred node ID where memory should be allocated * @size: the number of bytes to allocate @@ -5048,7 +5063,7 @@ EXPORT_SYMBOL(alloc_pages_exact); * * Return: pointer to the allocated area or %NULL in case of error. */ -void * __meminit alloc_pages_exact_nid(int nid, size_t size, gfp_t gfp_mask) +void * __meminit _alloc_pages_exact_nid(int nid, size_t size, gfp_t gfp_mask) { unsigned int order = get_order(size); struct page *p; @@ -5056,7 +5071,7 @@ void * __meminit alloc_pages_exact_nid(int nid, size_t size, gfp_t gfp_mask) if (WARN_ON_ONCE(gfp_mask & (__GFP_COMP | __GFP_HIGHMEM))) gfp_mask &= ~(__GFP_COMP | __GFP_HIGHMEM); - p = alloc_pages_node(nid, gfp_mask, order); + p = _alloc_pages_node(nid, gfp_mask, order); if (!p) return NULL; return make_alloc_exact((unsigned long)page_address(p), order, size); @@ -6729,7 +6744,7 @@ int __alloc_contig_migrate_range(struct compact_control *cc, } /** - * alloc_contig_range() -- tries to allocate given range of pages + * _alloc_contig_range() -- tries to allocate given range of pages * @start: start PFN to allocate * @end: one-past-the-last PFN to allocate * @migratetype: migratetype of the underlying pageblocks (either @@ -6749,7 +6764,7 @@ int __alloc_contig_migrate_range(struct compact_control *cc, * pages which PFN is in [start, end) are allocated for the caller and * need to be freed with free_contig_range(). 
*/ -int alloc_contig_range(unsigned long start, unsigned long end, +int _alloc_contig_range(unsigned long start, unsigned long end, unsigned migratetype, gfp_t gfp_mask) { unsigned long outer_start, outer_end; @@ -6873,15 +6888,15 @@ int alloc_contig_range(unsigned long start, unsigned long end, undo_isolate_page_range(start, end, migratetype); return ret; } -EXPORT_SYMBOL(alloc_contig_range); +EXPORT_SYMBOL(_alloc_contig_range); static int __alloc_contig_pages(unsigned long start_pfn, unsigned long nr_pages, gfp_t gfp_mask) { unsigned long end_pfn = start_pfn + nr_pages; - return alloc_contig_range(start_pfn, end_pfn, MIGRATE_MOVABLE, - gfp_mask); + return _alloc_contig_range(start_pfn, end_pfn, MIGRATE_MOVABLE, + gfp_mask); } static bool pfn_range_valid_contig(struct zone *z, unsigned long start_pfn, @@ -6916,7 +6931,7 @@ static bool zone_spans_last_pfn(const struct zone *zone, } /** - * alloc_contig_pages() -- tries to find and allocate contiguous range of pages + * _alloc_contig_pages() -- tries to find and allocate contiguous range of pages * @nr_pages: Number of contiguous pages to allocate * @gfp_mask: GFP mask to limit search and used during compaction * @nid: Target node @@ -6936,8 +6951,8 @@ static bool zone_spans_last_pfn(const struct zone *zone, * * Return: pointer to contiguous pages on success, or NULL if not successful. */ -struct page *alloc_contig_pages(unsigned long nr_pages, gfp_t gfp_mask, - int nid, nodemask_t *nodemask) +struct page *_alloc_contig_pages(unsigned long nr_pages, gfp_t gfp_mask, + int nid, nodemask_t *nodemask) { unsigned long ret, pfn, flags; struct zonelist *zonelist; From patchwork Mon May 1 16:54:31 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suren Baghdasaryan X-Patchwork-Id: 89089 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp58093vqo; Mon, 1 May 2023 10:00:40 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ79Corpgcpr7ZcQOgFpYHrI1g1Xan2gGdH9eNcbqSNTVlgDV7LfI+mU+cfIFHEwqZE8lrvc X-Received: by 2002:a17:903:2304:b0:1a1:dd2a:fe6c with SMTP id d4-20020a170903230400b001a1dd2afe6cmr15153831plh.53.1682960440063; Mon, 01 May 2023 10:00:40 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682960440; cv=none; d=google.com; s=arc-20160816; b=cR8S+Hd9aclWRY7BV9NEtI/TIjLCtoBAC9RSuBtrC/Ctah23tNRf3Jz5eO/y9ZswmN CfoMmlN5QkgpMWbbdVtWJFQShq9piFPhpDJa2ft3Kwt51K7NBeL0Yob2tvjNAlp6xUhG nd+XDR87GEQTpUqfDq4oyy2Kw54DOUeh2CUTjc9Co6AgMwReLjQlrXqKpALpS1R8E0Le b3cVLmWWLlI/kQ2HOoYtiOd4LeDrAQtiMrVBSh1V0ljisKwTj/NWRznIHEz93kAFu5vJ YfNYerSKs9x3zgPB03MpzdQtFv4lO8yhclwOP/FBBwbg3S4W1f8OmEpWgyCNoVTfNhfK /feA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:cc:to:from:subject:message-id:references :mime-version:in-reply-to:date:dkim-signature; bh=ia0f0ycGZD+jM5YS+jW1bQxvUpAwQcUQoxrBuJXYMcE=; b=UND8dLf3gE9HqnZOJ1Og5t0kHN8lY+mUCtJqP/x84Ekv5iGyO4bKNz9OfCz6HUau5K uOTq+xkFmDIs0Sj0ubQp0guapkJ8DeSCNnCOzo2BkOKolYik83UL6G0bV6RwMh4DtApu CxIzF+pGbDiTe6EtUzd53bbg4ZOp6qyWPUt7JICfhY5IxLOET69tKjuMN/eF8bCm0xe8 gBZvvrbTTD1oIJFLqLo3Geu2zSv1lrffF/7FEn++GpTeL4BOfLv7Q9DuM8JEShGfy9mn 6Al6voS3eJbzqCgT9j770CWbElfg+Y3GM4V5Ep6yDyzU4HFqF53dQ6jZ2FeYoekFqjAm 11vQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@google.com header.s=20221208 header.b=5JJhV76m; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) 
smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id q14-20020a170902dace00b0019bf9b4b5f1si29734118plx.629.2023.05.01.10.00.23; Mon, 01 May 2023 10:00:40 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@google.com header.s=20221208 header.b=5JJhV76m; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233111AbjEAQ7u (ORCPT + 99 others); Mon, 1 May 2023 12:59:50 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33148 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232672AbjEAQ6k (ORCPT ); Mon, 1 May 2023 12:58:40 -0400 Received: from mail-yb1-xb4a.google.com (mail-yb1-xb4a.google.com [IPv6:2607:f8b0:4864:20::b4a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E40BA30D2 for ; Mon, 1 May 2023 09:56:10 -0700 (PDT) Received: by mail-yb1-xb4a.google.com with SMTP id 3f1490d57ef6-b9e2b65f2eeso1390420276.2 for ; Mon, 01 May 2023 09:56:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1682960156; x=1685552156; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=ia0f0ycGZD+jM5YS+jW1bQxvUpAwQcUQoxrBuJXYMcE=; b=5JJhV76m95/YBoZ/xZu42cKOJXhgJORCHh7GQ1vgchybA9Jz2W8JB87J32lzEHqVAD 5MpwShEQTLsgetTJKB81Ya9oYfauOk73A+wkcp3YnAqkuYR2uGLglMtljDk1oXUFeXUF eg/dqf1WTTeod1mNhPnzcjxqEJmrvwX1WQABkeXqhDWaPCFp/KOIQdT6Kd/lNmNqK8uA jO52yQ6iu7pCgMF+oglZnvEtlqmI75FROfYcJYL0lGgLhZXRGGNj8gM5KHULwpV7DXj0 IQGmZzm/JuCCa0BNo8vc+UrvY2fi7yQcbqUa8T0x4PnakvTfaxlxsiA9vMtpl/EDSZYr 7ZHg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682960156; x=1685552156; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=ia0f0ycGZD+jM5YS+jW1bQxvUpAwQcUQoxrBuJXYMcE=; b=Y33147DBAqIbCetybWe+1ZR9XP1sYm6x4rXx/VjaSGoBTKfr3Yg3SHlrmwJ+xlDkSq G3Brd2kyJaiEeNoSFgYrqdGStonRMWQ462ptuHqE0ZCgVIdP9lrfScW3eu4B16zUPKCr OuDDepJtutonyBRARXFGr0sEz5lBl2y9pPN11XIwO26hLF+JWjH5L4y3wGGfo96ejQ7f lBeZ3pw9wzToV61PR+aLJIoHhYche5IDUzCG/DqqdDvQORFUODJxwRBNLPR2ytk0uWoV fySvDeNLVfICPwBIgxGKci65RapkoydaQg5vDGQ7OMdy9TCnEqMvUMKTR9MaBlzcVf95 q++w== X-Gm-Message-State: AC+VfDwyGnqp9PqJ7saX1QwaP82nOlKJ6nYG0jlRtkBMQ5ICEWnGb/uG OoS/BSpXwgLBIpyRZxEh1NuhCYUlepE= X-Received: from surenb-desktop.mtv.corp.google.com ([2620:15c:211:201:6d24:3efd:facc:7ac4]) (user=surenb job=sendgmr) by 2002:a25:5d1:0:b0:b9d:52cf:4a6b with SMTP id 200-20020a2505d1000000b00b9d52cf4a6bmr4308920ybf.1.1682960156135; Mon, 01 May 2023 09:55:56 -0700 (PDT) Date: Mon, 1 May 2023 09:54:31 -0700 In-Reply-To: <20230501165450.15352-1-surenb@google.com> Mime-Version: 1.0 References: <20230501165450.15352-1-surenb@google.com> X-Mailer: git-send-email 2.40.1.495.gc816e09b53d-goog Message-ID: <20230501165450.15352-22-surenb@google.com> Subject: [PATCH 21/40] mm/page_ext: enable early_page_ext when 
CONFIG_MEM_ALLOC_PROFILING_DEBUG=y From: Suren Baghdasaryan To: akpm@linux-foundation.org Cc: kent.overstreet@linux.dev, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, roman.gushchin@linux.dev, mgorman@suse.de, dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com, corbet@lwn.net, void@manifault.com, peterz@infradead.org, juri.lelli@redhat.com, ldufour@linux.ibm.com, catalin.marinas@arm.com, will@kernel.org, arnd@arndb.de, tglx@linutronix.de, mingo@redhat.com, dave.hansen@linux.intel.com, x86@kernel.org, peterx@redhat.com, david@redhat.com, axboe@kernel.dk, mcgrof@kernel.org, masahiroy@kernel.org, nathan@kernel.org, dennis@kernel.org, tj@kernel.org, muchun.song@linux.dev, rppt@kernel.org, paulmck@kernel.org, pasha.tatashin@soleen.com, yosryahmed@google.com, yuzhao@google.com, dhowells@redhat.com, hughd@google.com, andreyknvl@gmail.com, keescook@chromium.org, ndesaulniers@google.com, gregkh@linuxfoundation.org, ebiggers@google.com, ytcoode@gmail.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, bristot@redhat.com, vschneid@redhat.com, cl@linux.com, penberg@kernel.org, iamjoonsoo.kim@lge.com, 42.hyeyoo@gmail.com, glider@google.com, elver@google.com, dvyukov@google.com, shakeelb@google.com, songmuchun@bytedance.com, jbaron@akamai.com, rientjes@google.com, minchan@google.com, kaleshsingh@google.com, surenb@google.com, kernel-team@android.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-arch@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, kasan-dev@googlegroups.com, cgroups@vger.kernel.org X-Spam-Status: No, score=-9.6 required=5.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1764711926213554656?= X-GMAIL-MSGID: =?utf-8?q?1764711926213554656?= For all page allocations to be tagged, page_ext has to be initialized before the first page allocation. Early tasks allocate their stacks using page allocator before alloc_node_page_ext() initializes page_ext area, unless early_page_ext is enabled. Therefore these allocations will generate a warning when CONFIG_MEM_ALLOC_PROFILING_DEBUG is enabled. Enable early_page_ext whenever CONFIG_MEM_ALLOC_PROFILING_DEBUG=y to ensure page_ext initialization prior to any page allocation. This will have all the negative effects associated with early_page_ext, such as possible longer boot time, therefore we enable it only when debugging with CONFIG_MEM_ALLOC_PROFILING_DEBUG enabled and not universally for CONFIG_MEM_ALLOC_PROFILING. Signed-off-by: Suren Baghdasaryan --- mm/page_ext.c | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/mm/page_ext.c b/mm/page_ext.c index eaf054ec276c..55ba797f8881 100644 --- a/mm/page_ext.c +++ b/mm/page_ext.c @@ -96,7 +96,16 @@ unsigned long page_ext_size; static unsigned long total_usage; struct page_ext *lookup_page_ext(const struct page *page); +#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG +/* + * To ensure correct allocation tagging for pages, page_ext should be available + * before the first page allocation. 
Otherwise early task stacks will be + * allocated before page_ext initialization and missing tags will be flagged. + */ +bool early_page_ext __meminitdata = true; +#else bool early_page_ext __meminitdata; +#endif static int __init setup_early_page_ext(char *str) { early_page_ext = true; From patchwork Mon May 1 16:54:32 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suren Baghdasaryan X-Patchwork-Id: 89090 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp58183vqo; Mon, 1 May 2023 10:00:46 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ7c+HxIp2GE2HYeXp+vR4l0BaorgohV7Y4vUtTV9w/niZ3P8LuCG14stNlPkBnD6/Y50GZ9 X-Received: by 2002:a17:902:d4c7:b0:1a6:dc3b:9ed2 with SMTP id o7-20020a170902d4c700b001a6dc3b9ed2mr17683090plg.13.1682960446439; Mon, 01 May 2023 10:00:46 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682960446; cv=none; d=google.com; s=arc-20160816; b=vEu6muWMYy5qaIYXHVUVaTDk0qjrmX7xK6x+KXdRnQlVRSnWkVW7Q53mO5DayEdmYE nfqu25vI9uJhwbTMxxDZPCiiAbKRpB5RbcXTDhJzwM9lhDn51VNkhZ+OT0itCGTYNumi WaP5iysE73ihvygA1OKwiJerpPjFyNQBK9bLmM2frUcYogiMf6aoHWk4slP1hkDxSzvg k5Q/rCXI63yehWQQZeJkRo8f0CeWSmQyC8SkyOFN7zOWD9mCID8PQN/ZGIFoz2WlynbO d9bBzebowGtj0HKSx4PDELi8b58GsZMmVUYqiUqe7WGKNIPu7U+V88lpa5McdCb5QU2W DuwA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:cc:to:from:subject:message-id:references :mime-version:in-reply-to:date:dkim-signature; bh=+1Ptlsu3HrvWcynZoOK0ZqIIaOTUmOtwKZY5J1VaLvs=; b=SfzU2q25Osg0KHwskoM8JuwdWrxSn64HvsN3qWO21fC+nbgFaKM8nWB22Qlz/Mh8em ZGaSMKzkWFdj5rH5ytPke8SLy+4hZXaodRAVfiCPtySwBuVhZkAhaTyX1JRd3jLfJXbu q9wA1EdFS6WXHB0BsYjrLv8il6j/jXAQBHZ2qtUsnsCIu8nxdRhcb7YgUSjbcAfG5ZM7 BnlQNx45TsCi9aE4JppKccLvTzFAsOurUB7YxXH96j2A1ccfmyCfrFENsowwTUIW9H9x d/dTVEl35bsaQq9wq4QR1edBYMXNZKRH7ncKHhl3dSSdXpJ6ASVKeUv3eQ+Og+XJN+13 VyKA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@google.com header.s=20221208 header.b="7gQ7i/dA"; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id n9-20020a170902d2c900b001a6eba7583csi29365584plc.633.2023.05.01.10.00.30; Mon, 01 May 2023 10:00:46 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@google.com header.s=20221208 header.b="7gQ7i/dA"; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233171AbjEAQ77 (ORCPT + 99 others); Mon, 1 May 2023 12:59:59 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33296 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232806AbjEAQ67 (ORCPT ); Mon, 1 May 2023 12:58:59 -0400 Received: from mail-yb1-xb4a.google.com (mail-yb1-xb4a.google.com [IPv6:2607:f8b0:4864:20::b4a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A0C5C30ED for ; Mon, 1 May 2023 09:56:16 -0700 (PDT) Received: by mail-yb1-xb4a.google.com with SMTP id 3f1490d57ef6-b96ee51ee20so3263415276.3 for ; Mon, 01 May 2023 09:56:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1682960158; x=1685552158; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=+1Ptlsu3HrvWcynZoOK0ZqIIaOTUmOtwKZY5J1VaLvs=; b=7gQ7i/dAC1G0h1XxKulN+zlGUyuRlgFLhf3t4U3ldYpJ9ZDjgqW3Y7teulwLUtCfTD /VnlwalgQGiFZbmlj07JbUX6Sp54fAN7SxMoxkItZT8pyryAeS9XmoLet1E+NORYJiar iYyZW8dCc0lMzuXcFL3U4QE+FrlZuAGGckHXyur/pD6//ulYUoFzkf8zDD0aROPp40k/ xmTrCxx3e3E9BLeb2nij/r9kqOJAFZfyDKOq1UGUdyyy0zgGTjg9yGYmpyEbnJHJf0ZV W5qYg5oFjenWEcOuTpD8cYePEcPbJSlpmAYb7YzYWjMWXvtbqAw/EQzvEU6N9zSkiTXv C/RA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682960158; x=1685552158; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=+1Ptlsu3HrvWcynZoOK0ZqIIaOTUmOtwKZY5J1VaLvs=; b=TJzdR5so5xf77XYQrctQe/nQh0jFlrfF9TWCL5v8L7ShQQ01F0rGHowAQ6BXDCUJ6e a6Cf6LOe7X3A39hW6oK1fdojc4AJumH36KBlSjxM6OolnBn0ud2uCnW64ozNXiynnlHD 3sYd9Hm6Ree4d1mzWsdhMrL9vVtM03Z3Eo/LXqtX32ngWz3EJIYW4nOK7195FkPw+vyQ pIxhY3wSAd8jv4RE5Vsl0bOHH5iBxtFJz9gfcZlLvCmOapWVUFERcyMq2EBySVg0Cgeh THnxaUNxO+9jVQjjkIUUqMna+HSbrDBQS4d/UjH7j+n3o0RNp6TfEUmYvYvDRUtSMs2K al6g== X-Gm-Message-State: AC+VfDyPf8OFkVDQ9eEybVvgHziVzg6GgQPA45/VLbjhseVGRhkRYgpI qFWwEVOzztgoj+QFJ3Uf2AQ4jv3R0ac= X-Received: from surenb-desktop.mtv.corp.google.com ([2620:15c:211:201:6d24:3efd:facc:7ac4]) (user=surenb job=sendgmr) by 2002:a05:6902:1028:b0:b8c:607:7669 with SMTP id x8-20020a056902102800b00b8c06077669mr8930549ybt.5.1682960158430; Mon, 01 May 2023 09:55:58 -0700 (PDT) Date: Mon, 1 May 2023 09:54:32 -0700 In-Reply-To: <20230501165450.15352-1-surenb@google.com> Mime-Version: 1.0 References: <20230501165450.15352-1-surenb@google.com> X-Mailer: git-send-email 2.40.1.495.gc816e09b53d-goog Message-ID: <20230501165450.15352-23-surenb@google.com> Subject: [PATCH 22/40] mm: create new codetag references during page splitting From: Suren Baghdasaryan To: akpm@linux-foundation.org Cc: kent.overstreet@linux.dev, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, roman.gushchin@linux.dev, 
mgorman@suse.de, dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com, corbet@lwn.net, void@manifault.com, peterz@infradead.org, juri.lelli@redhat.com, ldufour@linux.ibm.com, catalin.marinas@arm.com, will@kernel.org, arnd@arndb.de, tglx@linutronix.de, mingo@redhat.com, dave.hansen@linux.intel.com, x86@kernel.org, peterx@redhat.com, david@redhat.com, axboe@kernel.dk, mcgrof@kernel.org, masahiroy@kernel.org, nathan@kernel.org, dennis@kernel.org, tj@kernel.org, muchun.song@linux.dev, rppt@kernel.org, paulmck@kernel.org, pasha.tatashin@soleen.com, yosryahmed@google.com, yuzhao@google.com, dhowells@redhat.com, hughd@google.com, andreyknvl@gmail.com, keescook@chromium.org, ndesaulniers@google.com, gregkh@linuxfoundation.org, ebiggers@google.com, ytcoode@gmail.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, bristot@redhat.com, vschneid@redhat.com, cl@linux.com, penberg@kernel.org, iamjoonsoo.kim@lge.com, 42.hyeyoo@gmail.com, glider@google.com, elver@google.com, dvyukov@google.com, shakeelb@google.com, songmuchun@bytedance.com, jbaron@akamai.com, rientjes@google.com, minchan@google.com, kaleshsingh@google.com, surenb@google.com, kernel-team@android.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-arch@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, kasan-dev@googlegroups.com, cgroups@vger.kernel.org X-Spam-Status: No, score=-9.6 required=5.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE,USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1764711932649618995?= X-GMAIL-MSGID: =?utf-8?q?1764711932649618995?= When a high-order page is split into smaller ones, each newly split page should get its codetag. The original codetag is reused for these pages but it's recorded as 0-byte allocation because original codetag already accounts for the original high-order allocated page. 
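The splitting logic can be modelled outside the kernel as follows: the head page's reference already carries the full size of the high-order allocation, and each tail page gets a fresh reference to the same tag charged with zero bytes, so per-site byte totals stay correct while every page remains attributable. The struct fields below and the refs[] array standing in for per-page page_ext records are illustrative assumptions, not the kernel definitions.

#include <stdio.h>
#include <stddef.h>

struct alloc_tag {
	const char *site;
	size_t bytes;
	size_t refs;
};

struct codetag_ref {
	struct alloc_tag *tag;
};

static void ref_add(struct codetag_ref *ref, struct alloc_tag *tag, size_t bytes)
{
	ref->tag = tag;
	tag->bytes += bytes;
	tag->refs++;
}

/* One codetag_ref per page, as page_ext provides in the kernel. */
static void tag_split(struct codetag_ref *refs, unsigned int nr)
{
	struct alloc_tag *tag = refs[0].tag;
	unsigned int i;

	if (!tag)
		return;

	/* New references with 0 bytes accounted, mirroring the patch. */
	for (i = 1; i < nr; i++)
		ref_add(&refs[i], tag, 0);
}

int main(void)
{
	struct alloc_tag tag = { "driver.c:42", 0, 0 };
	struct codetag_ref refs[4] = { { NULL } };

	ref_add(&refs[0], &tag, 4 * 4096);	/* order-2 allocation: 4 pages */
	tag_split(refs, 4);			/* split into 4 order-0 pages */

	printf("%s: %zu bytes across %zu page refs\n",
	       tag.site, tag.bytes, tag.refs);	/* 16384 bytes, 4 refs */
	return 0;
}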
Signed-off-by: Suren Baghdasaryan --- include/linux/pgalloc_tag.h | 30 ++++++++++++++++++++++++++++++ mm/huge_memory.c | 2 ++ mm/page_alloc.c | 2 ++ 3 files changed, 34 insertions(+) diff --git a/include/linux/pgalloc_tag.h b/include/linux/pgalloc_tag.h index 567327c1c46f..0cbba13869b5 100644 --- a/include/linux/pgalloc_tag.h +++ b/include/linux/pgalloc_tag.h @@ -52,11 +52,41 @@ static inline void pgalloc_tag_dec(struct page *page, unsigned int order) } } +static inline void pgalloc_tag_split(struct page *page, unsigned int nr) +{ + int i; + struct page_ext *page_ext; + union codetag_ref *ref; + struct alloc_tag *tag; + + if (!mem_alloc_profiling_enabled()) + return; + + page_ext = page_ext_get(page); + if (unlikely(!page_ext)) + return; + + ref = codetag_ref_from_page_ext(page_ext); + if (!ref->ct) + goto out; + + tag = ct_to_alloc_tag(ref->ct); + page_ext = page_ext_next(page_ext); + for (i = 1; i < nr; i++) { + /* New reference with 0 bytes accounted */ + alloc_tag_add(codetag_ref_from_page_ext(page_ext), tag, 0); + page_ext = page_ext_next(page_ext); + } +out: + page_ext_put(page_ext); +} + #else /* CONFIG_MEM_ALLOC_PROFILING */ static inline union codetag_ref *get_page_tag_ref(struct page *page) { return NULL; } static inline void put_page_tag_ref(union codetag_ref *ref) {} #define pgalloc_tag_dec(__page, __size) do {} while (0) +static inline void pgalloc_tag_split(struct page *page, unsigned int nr) {} #endif /* CONFIG_MEM_ALLOC_PROFILING */ diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 624671aaa60d..221cce0052a2 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -37,6 +37,7 @@ #include #include #include +#include #include #include @@ -2557,6 +2558,7 @@ static void __split_huge_page(struct page *page, struct list_head *list, /* Caller disabled irqs, so they are still disabled here */ split_page_owner(head, nr); + pgalloc_tag_split(head, nr); /* See comment in __split_huge_page_tail() */ if (PageAnon(head)) { diff --git a/mm/page_alloc.c b/mm/page_alloc.c index edd35500f7f6..8cf5a835af7f 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -2796,6 +2796,7 @@ void split_page(struct page *page, unsigned int order) for (i = 1; i < (1 << order); i++) set_page_refcounted(page + i); split_page_owner(page, 1 << order); + pgalloc_tag_split(page, 1 << order); split_page_memcg(page, 1 << order); } EXPORT_SYMBOL_GPL(split_page); @@ -5012,6 +5013,7 @@ static void *make_alloc_exact(unsigned long addr, unsigned int order, struct page *last = page + nr; split_page_owner(page, 1 << order); + pgalloc_tag_split(page, 1 << order); split_page_memcg(page, 1 << order); while (page < --last) set_page_refcounted(last); From patchwork Mon May 1 16:54:33 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suren Baghdasaryan X-Patchwork-Id: 89111 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp71434vqo; Mon, 1 May 2023 10:19:15 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ7jsvN4bOmZfVpy1ktPwskW1gc7wMEPGpaLwRW0Lh6FVlf2Bm3OAH7xUH7CY0vcoxn9u2K3 X-Received: by 2002:a05:6a00:1ac7:b0:63d:32a3:b5f7 with SMTP id f7-20020a056a001ac700b0063d32a3b5f7mr21745735pfv.12.1682961554947; Mon, 01 May 2023 10:19:14 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682961554; cv=none; d=google.com; s=arc-20160816; b=Zaas3egWZb0ZHDhvujFKV0ZLP6QbAAZDb0heNc5kHHsAb29RsAcdQ/rRB0BZgNvZQM KE/s/mCt7cLaVBqrjH2SNhHxIFNroJKKTnrVAVpUVN4+Akq/RJLlcV74WciBA31ZlqQk 
vMJcsx+VRg92LfRKdPrEHJimTGvenCElBtaeJjNP1VgJC4WWf8hxsSay3JooDnzOsA05 LnNNgKKp57g1Hd3zFgzBsWeDDJ+80j8ZFcf9XMFB7Q/tZrYwnAeQMJIvx6S8ikGu6YGZ 7T36LOb5zhyCb52u/6KYMd4hTH1BJBGzW1f1VFRPBnh/eAwWjOGc96nqW/Ygjq12r9AL v51w== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:cc:to:from:subject:message-id:references :mime-version:in-reply-to:date:dkim-signature; bh=3JQAU/dBFDp2j3prOKpx9pu61B2aRzHMn5ijo1mj0wk=; b=cjJ17eUSvVHSJIIONH8LfHS6vt8G5UGnwjxU7t8NkyGGxJiEDvkWo1V48wtK6jhfGM yMrSy0n3joam5bJRfpwxp2fONTIbryyk91VYL+b2OhJjl128/6LKN3u9eTWBxSoc7roE 4JduZaE+G+mcDF3cGcri3AvlKn3amwwdIc0/b6/Q1bsiYDN3GjWAyzdJHq6+7BHT0TVD aNK/K2FRs1Y1XLyaT77eKc5iD9Oh26/lF0ORPZDCnF8JHInX3/mccajKVa3xVus6Flp4 Y9BMsNDu9SFmWtsfc97vAJe10OQ8WEfiOhgZds1caxgJuD/PMEp7PUD0DVpfoK6cxwyu xKdg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@google.com header.s=20221208 header.b="Nr4QXyj/"; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id h190-20020a6253c7000000b0063b5e220edasi28184874pfb.400.2023.05.01.10.19.00; Mon, 01 May 2023 10:19:14 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@google.com header.s=20221208 header.b="Nr4QXyj/"; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232943AbjEAQ7A (ORCPT + 99 others); Mon, 1 May 2023 12:59:00 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33252 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232356AbjEAQ5q (ORCPT ); Mon, 1 May 2023 12:57:46 -0400 Received: from mail-yb1-xb49.google.com (mail-yb1-xb49.google.com [IPv6:2607:f8b0:4864:20::b49]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1E73A2D46 for ; Mon, 1 May 2023 09:56:01 -0700 (PDT) Received: by mail-yb1-xb49.google.com with SMTP id 3f1490d57ef6-b9a7df507c5so5361614276.1 for ; Mon, 01 May 2023 09:56:01 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1682960160; x=1685552160; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=3JQAU/dBFDp2j3prOKpx9pu61B2aRzHMn5ijo1mj0wk=; b=Nr4QXyj/etzJlAh8i4e3XnDFrHrok+4KAhChzrGdGYpNpzqnTzNz+b7TDzIFPvjVWJ 9m6bX+drGU9A6fH4AYp0HUvbrb0fCoSPBmCg7BeIb7knPz8EfEfG9cPbzcBJ5VX+RXOe xKxhA1OiVlh84AEwzHBh+cMUHvzUPF37NpWH9RACZgYBnn9nhrwQ+ZtD6LOD5aF3vQAg h9QGbAMavYpJls9fxf3kDm4J6+Qn0mT0BuRWWAvY6omxOHY3anv02SbqbukFuexGdvmI uLtak6s/1IGom0RtuY8qFrrhQXG4Ym0F7I61Qiqk5+0EJAXYPW/dDKZJd/GArGchifG6 ytrA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682960160; x=1685552160; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=3JQAU/dBFDp2j3prOKpx9pu61B2aRzHMn5ijo1mj0wk=; 
b=Sc1mQ2ck5OfdtWZfgXIirNbAn2BKmbCnL2LoM92sIaHi34kkNt5TRJNlkk3lpLgkoV ytO3lanyFH4YC+ZdM7SA5Q4yTjvSxliSGJGu2+TBtZuSfv9fjtqaUrwgxCLKYxEZPVKX eG9hQTQ25hrMa4MdX3T7OVlHYoa7pHVZBCMjNIokPTYn3gEfYYQ0XuzNIVt5Y4eLpWVC eGGucfLeOXmVetl6H7/YGTjLRY1L8DidkZz6vzD6Dt/7OcGWvb/bYE1WH6vqUeDNEEaq rCWFellHN+6da9xREqueffsvDI23MTdWyEzdLmPOEuDjTJvXN96x3GIRL/CL8OZ/T/QL YbMw== X-Gm-Message-State: AC+VfDyqT4mCfxV4BBzSfKIZeFcgvW+W3YDeWuGpqk7aQ/8ZYPLWNczZ Ll8ofhcqN9WFM7CEPFZKhY/8ttuK8hs= X-Received: from surenb-desktop.mtv.corp.google.com ([2620:15c:211:201:6d24:3efd:facc:7ac4]) (user=surenb job=sendgmr) by 2002:a25:2484:0:b0:b95:e649:34b6 with SMTP id k126-20020a252484000000b00b95e64934b6mr8454589ybk.1.1682960160542; Mon, 01 May 2023 09:56:00 -0700 (PDT) Date: Mon, 1 May 2023 09:54:33 -0700 In-Reply-To: <20230501165450.15352-1-surenb@google.com> Mime-Version: 1.0 References: <20230501165450.15352-1-surenb@google.com> X-Mailer: git-send-email 2.40.1.495.gc816e09b53d-goog Message-ID: <20230501165450.15352-24-surenb@google.com> Subject: [PATCH 23/40] lib: add codetag reference into slabobj_ext From: Suren Baghdasaryan To: akpm@linux-foundation.org Cc: kent.overstreet@linux.dev, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, roman.gushchin@linux.dev, mgorman@suse.de, dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com, corbet@lwn.net, void@manifault.com, peterz@infradead.org, juri.lelli@redhat.com, ldufour@linux.ibm.com, catalin.marinas@arm.com, will@kernel.org, arnd@arndb.de, tglx@linutronix.de, mingo@redhat.com, dave.hansen@linux.intel.com, x86@kernel.org, peterx@redhat.com, david@redhat.com, axboe@kernel.dk, mcgrof@kernel.org, masahiroy@kernel.org, nathan@kernel.org, dennis@kernel.org, tj@kernel.org, muchun.song@linux.dev, rppt@kernel.org, paulmck@kernel.org, pasha.tatashin@soleen.com, yosryahmed@google.com, yuzhao@google.com, dhowells@redhat.com, hughd@google.com, andreyknvl@gmail.com, keescook@chromium.org, ndesaulniers@google.com, gregkh@linuxfoundation.org, ebiggers@google.com, ytcoode@gmail.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, bristot@redhat.com, vschneid@redhat.com, cl@linux.com, penberg@kernel.org, iamjoonsoo.kim@lge.com, 42.hyeyoo@gmail.com, glider@google.com, elver@google.com, dvyukov@google.com, shakeelb@google.com, songmuchun@bytedance.com, jbaron@akamai.com, rientjes@google.com, minchan@google.com, kaleshsingh@google.com, surenb@google.com, kernel-team@android.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-arch@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, kasan-dev@googlegroups.com, cgroups@vger.kernel.org X-Spam-Status: No, score=-9.6 required=5.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE,USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1764713095418884877?= X-GMAIL-MSGID: =?utf-8?q?1764713095418884877?= To store code tag for every slab object, a codetag reference is embedded into slabobj_ext when CONFIG_MEM_ALLOC_PROFILING=y. 
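A compile-checkable sketch of the resulting per-object extension layout and the gating helper is shown below. The config macros are force-defined only so the sketch builds standalone, and mem_alloc_profiling_enabled() is stubbed out; field names follow the patch, the scaffolding does not.

#include <stdio.h>
#include <stdbool.h>

#define CONFIG_MEMCG_KMEM 1
#define CONFIG_MEM_ALLOC_PROFILING 1

struct obj_cgroup;		/* opaque here */
struct codetag;

union codetag_ref {
	struct codetag *ct;
};

struct slabobj_ext {
#ifdef CONFIG_MEMCG_KMEM
	struct obj_cgroup *objcg;
#endif
#ifdef CONFIG_MEM_ALLOC_PROFILING
	union codetag_ref ref;
#endif
} __attribute__((aligned(8)));

static bool mem_alloc_profiling_enabled(void)
{
	return true;		/* stub; a static key in the kernel */
}

/* Mirrors the mm/slab.h hunk: obj_exts are needed whenever profiling is on. */
static bool need_slab_obj_ext(void)
{
#ifdef CONFIG_MEM_ALLOC_PROFILING
	if (mem_alloc_profiling_enabled())
		return true;
#endif
	return false;
}

int main(void)
{
	/* One slabobj_ext per object in a slab's obj_exts vector. */
	struct slabobj_ext exts[8] = { 0 };

	printf("need obj_ext: %d, per-object overhead: %zu bytes\n",
	       need_slab_obj_ext(), sizeof(exts[0]));
	return 0;
}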
Signed-off-by: Suren Baghdasaryan Co-developed-by: Kent Overstreet Signed-off-by: Kent Overstreet --- include/linux/memcontrol.h | 5 +++++ lib/Kconfig.debug | 1 + mm/slab.h | 4 ++++ 3 files changed, 10 insertions(+) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 5e2da63c525f..c7f21b15b540 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -1626,7 +1626,12 @@ unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order, * if MEMCG_DATA_OBJEXTS is set. */ struct slabobj_ext { +#ifdef CONFIG_MEMCG_KMEM struct obj_cgroup *objcg; +#endif +#ifdef CONFIG_MEM_ALLOC_PROFILING + union codetag_ref ref; +#endif } __aligned(8); static inline void __inc_lruvec_kmem_state(void *p, enum node_stat_item idx) diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index d3aa5ee0bf0d..4157c2251b07 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -968,6 +968,7 @@ config MEM_ALLOC_PROFILING select CODE_TAGGING select LAZY_PERCPU_COUNTER select PAGE_EXTENSION + select SLAB_OBJ_EXT help Track allocation source code and record total allocation size initiated at that code location. The mechanism can be used to track diff --git a/mm/slab.h b/mm/slab.h index bec202bdcfb8..f953e7c81e98 100644 --- a/mm/slab.h +++ b/mm/slab.h @@ -418,6 +418,10 @@ int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s, static inline bool need_slab_obj_ext(void) { +#ifdef CONFIG_MEM_ALLOC_PROFILING + if (mem_alloc_profiling_enabled()) + return true; +#endif /* * CONFIG_MEMCG_KMEM creates vector of obj_cgroup objects conditionally * inside memcg_slab_post_alloc_hook. No other users for now. From patchwork Mon May 1 16:54:34 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suren Baghdasaryan X-Patchwork-Id: 89105 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp70635vqo; Mon, 1 May 2023 10:17:55 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ4fPMMn8Hp4EuE+wM5JxoA8HG41vy87ioDqTNOvAcoOQtMBnpyFSiNmwn2/WwX7VUP1yIiV X-Received: by 2002:a05:6a20:d69b:b0:f6:6d15:8b0c with SMTP id it27-20020a056a20d69b00b000f66d158b0cmr14904910pzb.35.1682961475330; Mon, 01 May 2023 10:17:55 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682961475; cv=none; d=google.com; s=arc-20160816; b=MWOlSYadKJSW5BBxS30C3P17Vgf1+vE6vh8kb45JlQbXl0e2etszVJgm60FOsaHaoh 6vbKclpmua9n+Wd+RMaEe2fT1U+VKKpRCxUGzIqOpTyYBirbjJN+oZENxtS/jQDsGUZr 3rRynVbkkHZXECwGkXnXYDtLeRrw+aXXor7JznfDPixjDaD9uYX/0s0nQhWQToAMY4Ug MLvzvPAjMH7YLTtFwbeC/rE5nL2tDyzZ78dFkG0nrvUOGcABY0fji2UY3c2D7lAiNuLc 39kzYSXgQas5/YRmmIcq7PfUVzQO3T6ZjQPbVnYjcrfIEAbWPsoOUj0K4kjbP90NICNq i4ww== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:cc:to:from:subject:message-id:references :mime-version:in-reply-to:date:dkim-signature; bh=UqN7FeV7aPqCmKItkyIx4xBxa3o94vtL5Phvca7QUfw=; b=0qb+2nfpgMBOBehX78YPnnHT0xa8pYjDLF8ECcNsXoOgPtsuayq4FCZjA7mZPAWIUV ao5k0kPRDuEJWfMGYADAyiCRfYc7umEsXR3DeG08Mse2QBDy3M2vb3L86FOtaP4uUFo3 DyauGjA711mQ+3JFwsXVv3KlmT6MaLL1/LuNRmHbdGvHc1pzbHDTpp1bFwPjbiIWJEAi 1fB16MMQIdWPUkwgpEIDBvfsgAGK6vdGMaG9x6YxlCT4dm+OMk4ZK7UjfW8JJm3Vc1Do NLyBGfHhbo/3WeP3ANeS3wUQCTDE+HDNJlapijJ5SyNszkcuTyI/V+5a4Zvg4Nl+EDG8 jlHw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@google.com header.s=20221208 header.b="E3/DZ7A2"; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) 
smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id k19-20020a63ff13000000b004fb98290dbdsi28435680pgi.50.2023.05.01.10.17.40; Mon, 01 May 2023 10:17:55 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@google.com header.s=20221208 header.b="E3/DZ7A2"; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232718AbjEARAF (ORCPT + 99 others); Mon, 1 May 2023 13:00:05 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60828 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232819AbjEAQ7D (ORCPT ); Mon, 1 May 2023 12:59:03 -0400 Received: from mail-yb1-xb49.google.com (mail-yb1-xb49.google.com [IPv6:2607:f8b0:4864:20::b49]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 180D230D5 for ; Mon, 1 May 2023 09:56:19 -0700 (PDT) Received: by mail-yb1-xb49.google.com with SMTP id 3f1490d57ef6-b8f324b3ef8so3225286276.0 for ; Mon, 01 May 2023 09:56:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1682960163; x=1685552163; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=UqN7FeV7aPqCmKItkyIx4xBxa3o94vtL5Phvca7QUfw=; b=E3/DZ7A2w5/KA0SPoOgDZl8cJAWCaY3BGBFQeSc/qzjQe0fM9xmbbTDocgwuOOp3LT GK3YJqraePyIrgvlz0C1S6Wec6V4sz3+3alNU4R6cNfgWKAm/KHB5K1fHdSBD6UsYReW OypIaQzRticsOqfq5jwU1kZdYTvr7RLYZbcqb3Ite92bHmcVQajevDqoiuCCwS030v+p Be3SQphLsWQFuGhQq8i0kXYiixrDLOjgisXe2ZSgniNRK9cfGXtFSqdN+Kd8F5/lQa7E /p5ZymC5neHH+BhUfAayBM2rdDwZ8KZfkYIvJtL+L0Ri9FOGTCuyj9nVUWNgD0tlqnQZ Ay6A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682960163; x=1685552163; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=UqN7FeV7aPqCmKItkyIx4xBxa3o94vtL5Phvca7QUfw=; b=LNEZ92sIGOiXwtIV06dDIRpC3VQPchEdrj8764AEK8FwjrHa4XYppOinwGqO9iUMjz oLADlDf4Q7nuglkmBueUpl2Q19E0iAKquLHCEm3HtHf+nUu9wR8Iq2MvtmxVJJQolApo G2GzkGeS3RZ/tLY8fNeCPpV16tC8KTyJlJ/hCeZbYvi5lFHeegV+lUYnxNtJK/iYqW+O czGRu+UGBUVxWCe4yf6trEVTVKntJcaP5kJnG0v2QMhmEnmxJJM2glWet5X9fmGlmyou pHolLsj8WC0g5mRJVCZzkQOzR6tpHedYYmu0EbsPWBX5eaUAppQ2tam9keRBzzIvcrit 4Niw== X-Gm-Message-State: AC+VfDykzFPu+9CoUI42iUe5CB7k6QN96sNRtH4fKkoQuc0uqcpJQXwF jr6qYKpZCHQRVUwvWqkvjF/AbwlTKag= X-Received: from surenb-desktop.mtv.corp.google.com ([2620:15c:211:201:6d24:3efd:facc:7ac4]) (user=surenb job=sendgmr) by 2002:a05:6902:18d6:b0:b8f:3647:d757 with SMTP id ck22-20020a05690218d600b00b8f3647d757mr9026699ybb.11.1682960162837; Mon, 01 May 2023 09:56:02 -0700 (PDT) Date: Mon, 1 May 2023 09:54:34 -0700 In-Reply-To: <20230501165450.15352-1-surenb@google.com> Mime-Version: 1.0 References: <20230501165450.15352-1-surenb@google.com> X-Mailer: git-send-email 2.40.1.495.gc816e09b53d-goog Message-ID: <20230501165450.15352-25-surenb@google.com> Subject: [PATCH 24/40] mm/slab: add allocation accounting into slab allocation and 
free paths From: Suren Baghdasaryan To: akpm@linux-foundation.org Cc: kent.overstreet@linux.dev, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, roman.gushchin@linux.dev, mgorman@suse.de, dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com, corbet@lwn.net, void@manifault.com, peterz@infradead.org, juri.lelli@redhat.com, ldufour@linux.ibm.com, catalin.marinas@arm.com, will@kernel.org, arnd@arndb.de, tglx@linutronix.de, mingo@redhat.com, dave.hansen@linux.intel.com, x86@kernel.org, peterx@redhat.com, david@redhat.com, axboe@kernel.dk, mcgrof@kernel.org, masahiroy@kernel.org, nathan@kernel.org, dennis@kernel.org, tj@kernel.org, muchun.song@linux.dev, rppt@kernel.org, paulmck@kernel.org, pasha.tatashin@soleen.com, yosryahmed@google.com, yuzhao@google.com, dhowells@redhat.com, hughd@google.com, andreyknvl@gmail.com, keescook@chromium.org, ndesaulniers@google.com, gregkh@linuxfoundation.org, ebiggers@google.com, ytcoode@gmail.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, bristot@redhat.com, vschneid@redhat.com, cl@linux.com, penberg@kernel.org, iamjoonsoo.kim@lge.com, 42.hyeyoo@gmail.com, glider@google.com, elver@google.com, dvyukov@google.com, shakeelb@google.com, songmuchun@bytedance.com, jbaron@akamai.com, rientjes@google.com, minchan@google.com, kaleshsingh@google.com, surenb@google.com, kernel-team@android.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-arch@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, kasan-dev@googlegroups.com, cgroups@vger.kernel.org X-Spam-Status: No, score=-9.6 required=5.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1764713012084873660?= X-GMAIL-MSGID: =?utf-8?q?1764713012084873660?= Account slab allocations using codetag reference embedded into slabobj_ext. 
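The accounting is symmetric: the allocation-side hook charges the current task's alloc_tag against the object's codetag reference, and the free-side hook finds the same reference and subtracts the object size. A condensed sketch of that pairing, with illustrative function names; the real hooks in the diff below also handle bulk frees, the missing-extension-vector case, and the mem_alloc_profiling_enabled() check:

/* Condensed, illustrative pairing; not the literal hooks from the patch. */
static void profile_slab_alloc(struct kmem_cache *s, struct slab *slab, void *obj)
{
	struct slabobj_ext *ext = slab_obj_exts(slab) + obj_to_index(s, slab, obj);

	alloc_tag_add(&ext->ref, current->alloc_tag, s->size);	/* charge the caller's tag */
}

static void profile_slab_free(struct kmem_cache *s, struct slab *slab, void *obj)
{
	struct slabobj_ext *ext = slab_obj_exts(slab) + obj_to_index(s, slab, obj);

	alloc_tag_sub(&ext->ref, s->size);			/* drop the charge */
}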
Signed-off-by: Suren Baghdasaryan Co-developed-by: Kent Overstreet Signed-off-by: Kent Overstreet --- include/linux/slab_def.h | 2 +- include/linux/slub_def.h | 4 ++-- mm/slab.c | 4 +++- mm/slab.h | 35 +++++++++++++++++++++++++++++++++++ 4 files changed, 41 insertions(+), 4 deletions(-) diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h index a61e7d55d0d3..23f14dcb8d5b 100644 --- a/include/linux/slab_def.h +++ b/include/linux/slab_def.h @@ -107,7 +107,7 @@ static inline void *nearest_obj(struct kmem_cache *cache, const struct slab *sla * reciprocal_divide(offset, cache->reciprocal_buffer_size) */ static inline unsigned int obj_to_index(const struct kmem_cache *cache, - const struct slab *slab, void *obj) + const struct slab *slab, const void *obj) { u32 offset = (obj - slab->s_mem); return reciprocal_divide(offset, cache->reciprocal_buffer_size); diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h index f6df03f934e5..e8be5b368857 100644 --- a/include/linux/slub_def.h +++ b/include/linux/slub_def.h @@ -176,14 +176,14 @@ static inline void *nearest_obj(struct kmem_cache *cache, const struct slab *sla /* Determine object index from a given position */ static inline unsigned int __obj_to_index(const struct kmem_cache *cache, - void *addr, void *obj) + void *addr, const void *obj) { return reciprocal_divide(kasan_reset_tag(obj) - addr, cache->reciprocal_size); } static inline unsigned int obj_to_index(const struct kmem_cache *cache, - const struct slab *slab, void *obj) + const struct slab *slab, const void *obj) { if (is_kfence_address(obj)) return 0; diff --git a/mm/slab.c b/mm/slab.c index ccc76f7455e9..026f0c08708a 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -3367,9 +3367,11 @@ static void cache_flusharray(struct kmem_cache *cachep, struct array_cache *ac) static __always_inline void __cache_free(struct kmem_cache *cachep, void *objp, unsigned long caller) { + struct slab *slab = virt_to_slab(objp); bool init; - memcg_slab_free_hook(cachep, virt_to_slab(objp), &objp, 1); + memcg_slab_free_hook(cachep, slab, &objp, 1); + alloc_tagging_slab_free_hook(cachep, slab, &objp, 1); if (is_kfence_address(objp)) { kmemleak_free_recursive(objp, cachep->flags); diff --git a/mm/slab.h b/mm/slab.h index f953e7c81e98..f9442d3a10b2 100644 --- a/mm/slab.h +++ b/mm/slab.h @@ -494,6 +494,35 @@ prepare_slab_obj_exts_hook(struct kmem_cache *s, gfp_t flags, void *p) #endif /* CONFIG_SLAB_OBJ_EXT */ +#ifdef CONFIG_MEM_ALLOC_PROFILING + +static inline void alloc_tagging_slab_free_hook(struct kmem_cache *s, struct slab *slab, + void **p, int objects) +{ + struct slabobj_ext *obj_exts; + int i; + + if (!mem_alloc_profiling_enabled()) + return; + + obj_exts = slab_obj_exts(slab); + if (!obj_exts) + return; + + for (i = 0; i < objects; i++) { + unsigned int off = obj_to_index(s, slab, p[i]); + + alloc_tag_sub(&obj_exts[off].ref, s->size); + } +} + +#else + +static inline void alloc_tagging_slab_free_hook(struct kmem_cache *s, struct slab *slab, + void **p, int objects) {} + +#endif /* CONFIG_MEM_ALLOC_PROFILING */ + #ifdef CONFIG_MEMCG_KMEM void mod_objcg_state(struct obj_cgroup *objcg, struct pglist_data *pgdat, enum node_stat_item idx, int nr); @@ -776,6 +805,12 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s, s->flags, flags); kmsan_slab_alloc(s, p[i], flags); obj_exts = prepare_slab_obj_exts_hook(s, flags, p[i]); + +#ifdef CONFIG_MEM_ALLOC_PROFILING + /* obj_exts can be allocated for other reasons */ + if (likely(obj_exts) && mem_alloc_profiling_enabled()) + 
alloc_tag_add(&obj_exts->ref, current->alloc_tag, s->size); +#endif } memcg_slab_post_alloc_hook(s, objcg, flags, size, p); From patchwork Mon May 1 16:54:35 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suren Baghdasaryan X-Patchwork-Id: 89114 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp71634vqo; Mon, 1 May 2023 10:19:37 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ6n46SIHMb331kRsbgciTZr6e4bQLP+ztcKU5UllBFIOvg3WuZ3u7JW/LekmzY+pGtNgxYg X-Received: by 2002:a05:6a21:300a:b0:f0:110b:bf9b with SMTP id yd10-20020a056a21300a00b000f0110bbf9bmr13136539pzb.16.1682961577553; Mon, 01 May 2023 10:19:37 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682961577; cv=none; d=google.com; s=arc-20160816; b=LCnbr+u2SxUV+YEZu6gfNZrtZp28Fc96culAjx+1hjEcT5lHayAXm7RPRH2tTTmnQG Ctb2C/4Bip++tOo+tQqv7TpaC8wDesdjLGS3vvMqy01+rK2xYnY2nOVZ/lbvQf9logZa p5EJaxD2p+K6Xp0HRF0BF4h2CsecJk3GoKEt+lohBq2F+Q1RM2t2YgoACKQr95zFqrxF YuAkpnr9H2BvldlYOtb/9nkugaZGAykOxXp7T3hw3JEVILRZjrVfcKCPRJ5f0aOTKrpD /RvDBf8q+OUcLpVkFwlwwbvboBfZoip9Q/qXIIJQI/SYTktOpzvE6NOJqZyEnegRVYpT aC4w== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:cc:to:from:subject:message-id:references :mime-version:in-reply-to:date:dkim-signature; bh=AlFKqZpStCoPEOrrPZX1wMSDo35mmKSmNx34G8mCjEo=; b=0oGAVa2Y3roX12Rp0tD5pLc2jyQ0eHBA1u7UzEVagWROL7YEWEwb135cSuR81WZE2J ZlzungdVd4qm3rpHnkCmZaxXPP2IsXgzTFynSXH3O7DtypgoVAfVfh336bisu0ws0AqE FImA70w7Y/FeQE80tqCGA1Do5Vw+QJ6K5Df/lxPLvof2icXZkqKRDWRYW4Rkq23+S1vU vX7BL6XVEr6hRE+g6aKiOKs52FGZ7HYtTK1FML1I5NWMmNbJsuP9D2p//iUknv3Mrk22 rxI9rfSoPBeQGaxl7XS+xLUvXOvVLsVlHmxqmLIsP2wxQbF/+SdjaYdmO7YE1+Y3VhIZ 8sGA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@google.com header.s=20221208 header.b="Hn84O/Fj"; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id z21-20020a630a55000000b0050bd9c7bc1bsi29239645pgk.30.2023.05.01.10.19.24; Mon, 01 May 2023 10:19:37 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@google.com header.s=20221208 header.b="Hn84O/Fj"; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233190AbjEARAK (ORCPT + 99 others); Mon, 1 May 2023 13:00:10 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60128 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232971AbjEAQ7Y (ORCPT ); Mon, 1 May 2023 12:59:24 -0400 Received: from mail-yb1-xb4a.google.com (mail-yb1-xb4a.google.com [IPv6:2607:f8b0:4864:20::b4a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CEFD63595 for ; Mon, 1 May 2023 09:56:20 -0700 (PDT) Received: by mail-yb1-xb4a.google.com with SMTP id 3f1490d57ef6-b8f32cc8c31so4979847276.2 for ; Mon, 01 May 2023 09:56:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1682960165; x=1685552165; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=AlFKqZpStCoPEOrrPZX1wMSDo35mmKSmNx34G8mCjEo=; b=Hn84O/FjnBQjw81g0VTyh/uHBdrl6I0At7mqU8YKI29J+zwV878Ud7aKtkVFupHujF +P33CHtldPo27h+zuVFhAddAj0A5Kkpw2KfEVH9tijZGDu22S6vwmzU89PxAqJvFSAZJ pngWOgRWLpGBUXF1EyiUqARF3hZwzVXrTynm7hdSHxdP6sPu2MVmUUTk1QEPnZ21ugxP cu7FraBIH82U3o2syZEGdLdo8n9cfMJuxKna3vatgjXqF9IMgQ4o3Y8inTxtaKJYGM5W EpQqsvgJxtK8rynO0iG4tmpcH/aAdCa/3MR3/1NiEMIlFA2NSYb16Me/KIyQUmkmPrNV 5otA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682960165; x=1685552165; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=AlFKqZpStCoPEOrrPZX1wMSDo35mmKSmNx34G8mCjEo=; b=TnANuK3/X6CE1lf97JmmlAJ5rz2xy+N9EEkXEk0kFMKSeXYbHaKZ5bwjtNx3CIHgRK jukqiSdgDg6qQ0wX/GRFRYDAi2TrafMrOAhWuHkrdv1YTrRTYKDaS8L9SqeVQV2w7C6L cPcGbkGnaPnTy4w9on7BKpEZHe9Vu8rZOf+mCkHR7LOVnKbSbJ2q+9pVOJ8mQ6ivVyzT G+F1+wJ/6cOzCxHiQJ+RhI/qBFru+38eCE7WhR9P0mkXjz9NK2ZlBxlEYQOzDTPuJCIW 6DqNXTuFhhVXGzr3WW9LXwEEfMuDrdG42sndqOIKJAsODcBI1uMXWovK383mg7BZYRyn bK8Q== X-Gm-Message-State: AC+VfDw1cTcjLqW8B8p93gqj4WCWhQrmXJUTClHfZSjCr0L4IqlonjlN wRqfyomB1hDRtUWNad345GcRGgb90mo= X-Received: from surenb-desktop.mtv.corp.google.com ([2620:15c:211:201:6d24:3efd:facc:7ac4]) (user=surenb job=sendgmr) by 2002:a25:3242:0:b0:b8f:6944:afeb with SMTP id y63-20020a253242000000b00b8f6944afebmr5782469yby.3.1682960165175; Mon, 01 May 2023 09:56:05 -0700 (PDT) Date: Mon, 1 May 2023 09:54:35 -0700 In-Reply-To: <20230501165450.15352-1-surenb@google.com> Mime-Version: 1.0 References: <20230501165450.15352-1-surenb@google.com> X-Mailer: git-send-email 2.40.1.495.gc816e09b53d-goog Message-ID: <20230501165450.15352-26-surenb@google.com> Subject: [PATCH 25/40] mm/slab: enable slab allocation tagging for kmalloc and friends From: Suren Baghdasaryan To: akpm@linux-foundation.org Cc: kent.overstreet@linux.dev, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, roman.gushchin@linux.dev, 
mgorman@suse.de, dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com, corbet@lwn.net, void@manifault.com, peterz@infradead.org, juri.lelli@redhat.com, ldufour@linux.ibm.com, catalin.marinas@arm.com, will@kernel.org, arnd@arndb.de, tglx@linutronix.de, mingo@redhat.com, dave.hansen@linux.intel.com, x86@kernel.org, peterx@redhat.com, david@redhat.com, axboe@kernel.dk, mcgrof@kernel.org, masahiroy@kernel.org, nathan@kernel.org, dennis@kernel.org, tj@kernel.org, muchun.song@linux.dev, rppt@kernel.org, paulmck@kernel.org, pasha.tatashin@soleen.com, yosryahmed@google.com, yuzhao@google.com, dhowells@redhat.com, hughd@google.com, andreyknvl@gmail.com, keescook@chromium.org, ndesaulniers@google.com, gregkh@linuxfoundation.org, ebiggers@google.com, ytcoode@gmail.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, bristot@redhat.com, vschneid@redhat.com, cl@linux.com, penberg@kernel.org, iamjoonsoo.kim@lge.com, 42.hyeyoo@gmail.com, glider@google.com, elver@google.com, dvyukov@google.com, shakeelb@google.com, songmuchun@bytedance.com, jbaron@akamai.com, rientjes@google.com, minchan@google.com, kaleshsingh@google.com, surenb@google.com, kernel-team@android.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-arch@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, kasan-dev@googlegroups.com, cgroups@vger.kernel.org X-Spam-Status: No, score=-9.6 required=5.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE,USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1764713119471120799?= X-GMAIL-MSGID: =?utf-8?q?1764713119471120799?= Redefine kmalloc, krealloc, kzalloc, kcalloc, etc. to record allocations and deallocations done by these functions. 
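The mechanism is the same for every entry point in this patch: the existing function is renamed with a leading underscore, and a macro with the old name wraps the call in alloc_hooks(), so the allocation is attributed to the caller's file and line rather than to the slab internals. Reduced to its simplest case (krealloc, matching the first hunk of the diff); the comment describes alloc_hooks() from earlier in the series only approximately:

/* Illustrative reduction of the renaming pattern applied throughout this patch. */
void * __must_check _krealloc(const void *objp, size_t new_size, gfp_t flags);

#define krealloc(_p, _size, _flags) \
	alloc_hooks(_krealloc(_p, _size, _flags), void*, NULL)

/*
 * alloc_hooks(), added earlier in the series, roughly expands to a statement
 * expression that points current->alloc_tag at a static tag declared for this
 * file:line, performs the wrapped call, then restores the previous tag, so
 * the per-callsite counters see the real caller.
 */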
Signed-off-by: Suren Baghdasaryan Co-developed-by: Kent Overstreet Signed-off-by: Kent Overstreet --- include/linux/slab.h | 175 ++++++++++++++++++++++--------------------- mm/slab.c | 16 ++-- mm/slab_common.c | 22 +++--- mm/slub.c | 17 +++-- mm/util.c | 10 +-- 5 files changed, 124 insertions(+), 116 deletions(-) diff --git a/include/linux/slab.h b/include/linux/slab.h index 99a146f3cedf..43c922524081 100644 --- a/include/linux/slab.h +++ b/include/linux/slab.h @@ -213,7 +213,10 @@ int kmem_cache_shrink(struct kmem_cache *s); /* * Common kmalloc functions provided by all allocators */ -void * __must_check krealloc(const void *objp, size_t new_size, gfp_t flags) __realloc_size(2); +void * __must_check _krealloc(const void *objp, size_t new_size, gfp_t flags) __realloc_size(2); +#define krealloc(_p, _size, _flags) \ + alloc_hooks(_krealloc(_p, _size, _flags), void*, NULL) + void kfree(const void *objp); void kfree_sensitive(const void *objp); size_t __ksize(const void *objp); @@ -451,6 +454,8 @@ static __always_inline unsigned int __kmalloc_index(size_t size, static_assert(PAGE_SHIFT <= 20); #define kmalloc_index(s) __kmalloc_index(s, true) +#include + void *__kmalloc(size_t size, gfp_t flags) __assume_kmalloc_alignment __alloc_size(1); /** @@ -463,9 +468,15 @@ void *__kmalloc(size_t size, gfp_t flags) __assume_kmalloc_alignment __alloc_siz * * Return: pointer to the new object or %NULL in case of error */ -void *kmem_cache_alloc(struct kmem_cache *cachep, gfp_t flags) __assume_slab_alignment __malloc; -void *kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru, - gfp_t gfpflags) __assume_slab_alignment __malloc; +void *_kmem_cache_alloc(struct kmem_cache *cachep, gfp_t flags) __assume_slab_alignment __malloc; +#define kmem_cache_alloc(_s, _flags) \ + alloc_hooks(_kmem_cache_alloc(_s, _flags), void*, NULL) + +void *_kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru, + gfp_t gfpflags) __assume_slab_alignment __malloc; +#define kmem_cache_alloc_lru(_s, _lru, _flags) \ + alloc_hooks(_kmem_cache_alloc_lru(_s, _lru, _flags), void*, NULL) + void kmem_cache_free(struct kmem_cache *s, void *objp); /* @@ -476,7 +487,9 @@ void kmem_cache_free(struct kmem_cache *s, void *objp); * Note that interrupts must be enabled when calling these functions. 
*/ void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p); -int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, void **p); +int _kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, void **p); +#define kmem_cache_alloc_bulk(_s, _flags, _size, _p) \ + alloc_hooks(_kmem_cache_alloc_bulk(_s, _flags, _size, _p), int, 0) static __always_inline void kfree_bulk(size_t size, void **p) { @@ -485,20 +498,32 @@ static __always_inline void kfree_bulk(size_t size, void **p) void *__kmalloc_node(size_t size, gfp_t flags, int node) __assume_kmalloc_alignment __alloc_size(1); -void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t flags, int node) __assume_slab_alignment - __malloc; +void *_kmem_cache_alloc_node(struct kmem_cache *s, gfp_t flags, int node) __assume_slab_alignment + __malloc; +#define kmem_cache_alloc_node(_s, _flags, _node) \ + alloc_hooks(_kmem_cache_alloc_node(_s, _flags, _node), void*, NULL) -void *kmalloc_trace(struct kmem_cache *s, gfp_t flags, size_t size) +void *_kmalloc_trace(struct kmem_cache *s, gfp_t flags, size_t size) __assume_kmalloc_alignment __alloc_size(3); -void *kmalloc_node_trace(struct kmem_cache *s, gfp_t gfpflags, +void *_kmalloc_node_trace(struct kmem_cache *s, gfp_t gfpflags, int node, size_t size) __assume_kmalloc_alignment __alloc_size(4); -void *kmalloc_large(size_t size, gfp_t flags) __assume_page_alignment +#define kmalloc_trace(_s, _flags, _size) \ + alloc_hooks(_kmalloc_trace(_s, _flags, _size), void*, NULL) + +#define kmalloc_node_trace(_s, _gfpflags, _node, _size) \ + alloc_hooks(_kmalloc_node_trace(_s, _gfpflags, _node, _size), void*, NULL) + +void *_kmalloc_large(size_t size, gfp_t flags) __assume_page_alignment __alloc_size(1); +#define kmalloc_large(_size, _flags) \ + alloc_hooks(_kmalloc_large(_size, _flags), void*, NULL) -void *kmalloc_large_node(size_t size, gfp_t flags, int node) __assume_page_alignment +void *_kmalloc_large_node(size_t size, gfp_t flags, int node) __assume_page_alignment __alloc_size(1); +#define kmalloc_large_node(_size, _flags, _node) \ + alloc_hooks(_kmalloc_large_node(_size, _flags, _node), void*, NULL) /** * kmalloc - allocate kernel memory @@ -554,37 +579,40 @@ void *kmalloc_large_node(size_t size, gfp_t flags, int node) __assume_page_align * Try really hard to succeed the allocation but fail * eventually. 
*/ -static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags) +static __always_inline __alloc_size(1) void *_kmalloc(size_t size, gfp_t flags) { if (__builtin_constant_p(size) && size) { unsigned int index; if (size > KMALLOC_MAX_CACHE_SIZE) - return kmalloc_large(size, flags); + return _kmalloc_large(size, flags); index = kmalloc_index(size); - return kmalloc_trace( + return _kmalloc_trace( kmalloc_caches[kmalloc_type(flags)][index], flags, size); } return __kmalloc(size, flags); } +#define kmalloc(_size, _flags) alloc_hooks(_kmalloc(_size, _flags), void*, NULL) -static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node) +static __always_inline __alloc_size(1) void *_kmalloc_node(size_t size, gfp_t flags, int node) { if (__builtin_constant_p(size) && size) { unsigned int index; if (size > KMALLOC_MAX_CACHE_SIZE) - return kmalloc_large_node(size, flags, node); + return _kmalloc_large_node(size, flags, node); index = kmalloc_index(size); - return kmalloc_node_trace( + return _kmalloc_node_trace( kmalloc_caches[kmalloc_type(flags)][index], flags, node, size); } return __kmalloc_node(size, flags, node); } +#define kmalloc_node(_size, _flags, _node) \ + alloc_hooks(_kmalloc_node(_size, _flags, _node), void*, NULL) /** * kmalloc_array - allocate memory for an array. @@ -592,16 +620,18 @@ static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t fla * @size: element size. * @flags: the type of memory to allocate (see kmalloc). */ -static inline __alloc_size(1, 2) void *kmalloc_array(size_t n, size_t size, gfp_t flags) +static inline __alloc_size(1, 2) void *_kmalloc_array(size_t n, size_t size, gfp_t flags) { size_t bytes; if (unlikely(check_mul_overflow(n, size, &bytes))) return NULL; if (__builtin_constant_p(n) && __builtin_constant_p(size)) - return kmalloc(bytes, flags); - return __kmalloc(bytes, flags); + return _kmalloc(bytes, flags); + return _kmalloc(bytes, flags); } +#define kmalloc_array(_n, _size, _flags) \ + alloc_hooks(_kmalloc_array(_n, _size, _flags), void*, NULL) /** * krealloc_array - reallocate memory for an array. @@ -610,18 +640,20 @@ static inline __alloc_size(1, 2) void *kmalloc_array(size_t n, size_t size, gfp_ * @new_size: new size of a single member of the array * @flags: the type of memory to allocate (see kmalloc) */ -static inline __realloc_size(2, 3) void * __must_check krealloc_array(void *p, - size_t new_n, - size_t new_size, - gfp_t flags) +static inline __realloc_size(2, 3) void * __must_check _krealloc_array(void *p, + size_t new_n, + size_t new_size, + gfp_t flags) { size_t bytes; if (unlikely(check_mul_overflow(new_n, new_size, &bytes))) return NULL; - return krealloc(p, bytes, flags); + return _krealloc(p, bytes, flags); } +#define krealloc_array(_p, _n, _size, _flags) \ + alloc_hooks(_krealloc_array(_p, _n, _size, _flags), void*, NULL) /** * kcalloc - allocate memory for an array. The memory is set to zero. @@ -629,16 +661,14 @@ static inline __realloc_size(2, 3) void * __must_check krealloc_array(void *p, * @size: element size. * @flags: the type of memory to allocate (see kmalloc). 
*/ -static inline __alloc_size(1, 2) void *kcalloc(size_t n, size_t size, gfp_t flags) -{ - return kmalloc_array(n, size, flags | __GFP_ZERO); -} +#define kcalloc(_n, _size, _flags) \ + kmalloc_array(_n, _size, (_flags) | __GFP_ZERO) void *__kmalloc_node_track_caller(size_t size, gfp_t flags, int node, unsigned long caller) __alloc_size(1); #define kmalloc_node_track_caller(size, flags, node) \ - __kmalloc_node_track_caller(size, flags, node, \ - _RET_IP_) + alloc_hooks(__kmalloc_node_track_caller(size, flags, node, \ + _RET_IP_), void*, NULL) /* * kmalloc_track_caller is a special version of kmalloc that records the @@ -648,11 +678,10 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t flags, int node, * allocator where we care about the real place the memory allocation * request comes from. */ -#define kmalloc_track_caller(size, flags) \ - __kmalloc_node_track_caller(size, flags, \ - NUMA_NO_NODE, _RET_IP_) +#define kmalloc_track_caller(size, flags) \ + kmalloc_node_track_caller(size, flags, NUMA_NO_NODE) -static inline __alloc_size(1, 2) void *kmalloc_array_node(size_t n, size_t size, gfp_t flags, +static inline __alloc_size(1, 2) void *_kmalloc_array_node(size_t n, size_t size, gfp_t flags, int node) { size_t bytes; @@ -660,75 +689,53 @@ static inline __alloc_size(1, 2) void *kmalloc_array_node(size_t n, size_t size, if (unlikely(check_mul_overflow(n, size, &bytes))) return NULL; if (__builtin_constant_p(n) && __builtin_constant_p(size)) - return kmalloc_node(bytes, flags, node); + return _kmalloc_node(bytes, flags, node); return __kmalloc_node(bytes, flags, node); } +#define kmalloc_array_node(_n, _size, _flags, _node) \ + alloc_hooks(_kmalloc_array_node(_n, _size, _flags, _node), void*, NULL) -static inline __alloc_size(1, 2) void *kcalloc_node(size_t n, size_t size, gfp_t flags, int node) -{ - return kmalloc_array_node(n, size, flags | __GFP_ZERO, node); -} +#define kcalloc_node(_n, _size, _flags, _node) \ + kmalloc_array_node(_n, _size, (_flags) | __GFP_ZERO, _node) /* * Shortcuts */ -static inline void *kmem_cache_zalloc(struct kmem_cache *k, gfp_t flags) -{ - return kmem_cache_alloc(k, flags | __GFP_ZERO); -} +#define kmem_cache_zalloc(_k, _flags) \ + kmem_cache_alloc(_k, (_flags)|__GFP_ZERO) /** * kzalloc - allocate memory. The memory is set to zero. * @size: how many bytes of memory are required. * @flags: the type of memory to allocate (see kmalloc). */ -static inline __alloc_size(1) void *kzalloc(size_t size, gfp_t flags) -{ - return kmalloc(size, flags | __GFP_ZERO); -} - -/** - * kzalloc_node - allocate zeroed memory from a particular memory node. - * @size: how many bytes of memory are required. - * @flags: the type of memory to allocate (see kmalloc). 
- * @node: memory node from which to allocate - */ -static inline __alloc_size(1) void *kzalloc_node(size_t size, gfp_t flags, int node) -{ - return kmalloc_node(size, flags | __GFP_ZERO, node); -} +#define kzalloc(_size, _flags) kmalloc(_size, (_flags)|__GFP_ZERO) +#define kzalloc_node(_size, _flags, _node) kmalloc_node(_size, (_flags)|__GFP_ZERO, _node) -extern void *kvmalloc_node(size_t size, gfp_t flags, int node) __alloc_size(1); -static inline __alloc_size(1) void *kvmalloc(size_t size, gfp_t flags) -{ - return kvmalloc_node(size, flags, NUMA_NO_NODE); -} -static inline __alloc_size(1) void *kvzalloc_node(size_t size, gfp_t flags, int node) -{ - return kvmalloc_node(size, flags | __GFP_ZERO, node); -} -static inline __alloc_size(1) void *kvzalloc(size_t size, gfp_t flags) -{ - return kvmalloc(size, flags | __GFP_ZERO); -} +extern void *_kvmalloc_node(size_t size, gfp_t flags, int node) __alloc_size(1); +#define kvmalloc_node(_size, _flags, _node) \ + alloc_hooks(_kvmalloc_node(_size, _flags, _node), void*, NULL) -static inline __alloc_size(1, 2) void *kvmalloc_array(size_t n, size_t size, gfp_t flags) -{ - size_t bytes; +#define kvmalloc(_size, _flags) kvmalloc_node(_size, _flags, NUMA_NO_NODE) +#define kvzalloc(_size, _flags) kvmalloc(_size, _flags|__GFP_ZERO) - if (unlikely(check_mul_overflow(n, size, &bytes))) - return NULL; +#define kvzalloc_node(_size, _flags, _node) kvmalloc_node(_size, _flags|__GFP_ZERO, _node) - return kvmalloc(bytes, flags); -} +#define kvmalloc_array(_n, _size, _flags) \ +({ \ + size_t _bytes; \ + \ + !check_mul_overflow(_n, _size, &_bytes) ? kvmalloc(_bytes, _flags) : NULL; \ +}) -static inline __alloc_size(1, 2) void *kvcalloc(size_t n, size_t size, gfp_t flags) -{ - return kvmalloc_array(n, size, flags | __GFP_ZERO); -} +#define kvcalloc(_n, _size, _flags) kvmalloc_array(_n, _size, _flags|__GFP_ZERO) -extern void *kvrealloc(const void *p, size_t oldsize, size_t newsize, gfp_t flags) +extern void *_kvrealloc(const void *p, size_t oldsize, size_t newsize, gfp_t flags) __realloc_size(3); + +#define kvrealloc(_p, _oldsize, _newsize, _flags) \ + alloc_hooks(_kvrealloc(_p, _oldsize, _newsize, _flags), void*, NULL) + extern void kvfree(const void *addr); extern void kvfree_sensitive(const void *addr, size_t len); diff --git a/mm/slab.c b/mm/slab.c index 026f0c08708a..e08bd3496f56 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -3448,18 +3448,18 @@ void *__kmem_cache_alloc_lru(struct kmem_cache *cachep, struct list_lru *lru, return ret; } -void *kmem_cache_alloc(struct kmem_cache *cachep, gfp_t flags) +void *_kmem_cache_alloc(struct kmem_cache *cachep, gfp_t flags) { return __kmem_cache_alloc_lru(cachep, NULL, flags); } -EXPORT_SYMBOL(kmem_cache_alloc); +EXPORT_SYMBOL(_kmem_cache_alloc); -void *kmem_cache_alloc_lru(struct kmem_cache *cachep, struct list_lru *lru, +void *_kmem_cache_alloc_lru(struct kmem_cache *cachep, struct list_lru *lru, gfp_t flags) { return __kmem_cache_alloc_lru(cachep, lru, flags); } -EXPORT_SYMBOL(kmem_cache_alloc_lru); +EXPORT_SYMBOL(_kmem_cache_alloc_lru); static __always_inline void cache_alloc_debugcheck_after_bulk(struct kmem_cache *s, gfp_t flags, @@ -3471,7 +3471,7 @@ cache_alloc_debugcheck_after_bulk(struct kmem_cache *s, gfp_t flags, p[i] = cache_alloc_debugcheck_after(s, flags, p[i], caller); } -int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, +int _kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, void **p) { struct obj_cgroup *objcg = NULL; @@ -3510,7 +3510,7 @@ int 
kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, kmem_cache_free_bulk(s, i, p); return 0; } -EXPORT_SYMBOL(kmem_cache_alloc_bulk); +EXPORT_SYMBOL(_kmem_cache_alloc_bulk); /** * kmem_cache_alloc_node - Allocate an object on the specified node @@ -3525,7 +3525,7 @@ EXPORT_SYMBOL(kmem_cache_alloc_bulk); * * Return: pointer to the new object or %NULL in case of error */ -void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid) +void *_kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid) { void *ret = slab_alloc_node(cachep, NULL, flags, nodeid, cachep->object_size, _RET_IP_); @@ -3533,7 +3533,7 @@ void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid) return ret; } -EXPORT_SYMBOL(kmem_cache_alloc_node); +EXPORT_SYMBOL(_kmem_cache_alloc_node); void *__kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid, size_t orig_size, diff --git a/mm/slab_common.c b/mm/slab_common.c index 42777d66d0e3..a05333bbb7f1 100644 --- a/mm/slab_common.c +++ b/mm/slab_common.c @@ -1101,7 +1101,7 @@ size_t __ksize(const void *object) return slab_ksize(folio_slab(folio)->slab_cache); } -void *kmalloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size) +void *_kmalloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size) { void *ret = __kmem_cache_alloc_node(s, gfpflags, NUMA_NO_NODE, size, _RET_IP_); @@ -1111,9 +1111,9 @@ void *kmalloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size) ret = kasan_kmalloc(s, ret, size, gfpflags); return ret; } -EXPORT_SYMBOL(kmalloc_trace); +EXPORT_SYMBOL(_kmalloc_trace); -void *kmalloc_node_trace(struct kmem_cache *s, gfp_t gfpflags, +void *_kmalloc_node_trace(struct kmem_cache *s, gfp_t gfpflags, int node, size_t size) { void *ret = __kmem_cache_alloc_node(s, gfpflags, node, size, _RET_IP_); @@ -1123,7 +1123,7 @@ void *kmalloc_node_trace(struct kmem_cache *s, gfp_t gfpflags, ret = kasan_kmalloc(s, ret, size, gfpflags); return ret; } -EXPORT_SYMBOL(kmalloc_node_trace); +EXPORT_SYMBOL(_kmalloc_node_trace); gfp_t kmalloc_fix_flags(gfp_t flags) { @@ -1168,7 +1168,7 @@ static void *__kmalloc_large_node(size_t size, gfp_t flags, int node) return ptr; } -void *kmalloc_large(size_t size, gfp_t flags) +void *_kmalloc_large(size_t size, gfp_t flags) { void *ret = __kmalloc_large_node(size, flags, NUMA_NO_NODE); @@ -1176,9 +1176,9 @@ void *kmalloc_large(size_t size, gfp_t flags) flags, NUMA_NO_NODE); return ret; } -EXPORT_SYMBOL(kmalloc_large); +EXPORT_SYMBOL(_kmalloc_large); -void *kmalloc_large_node(size_t size, gfp_t flags, int node) +void *_kmalloc_large_node(size_t size, gfp_t flags, int node) { void *ret = __kmalloc_large_node(size, flags, node); @@ -1186,7 +1186,7 @@ void *kmalloc_large_node(size_t size, gfp_t flags, int node) flags, node); return ret; } -EXPORT_SYMBOL(kmalloc_large_node); +EXPORT_SYMBOL(_kmalloc_large_node); #ifdef CONFIG_SLAB_FREELIST_RANDOM /* Randomize a generic freelist */ @@ -1405,7 +1405,7 @@ __do_krealloc(const void *p, size_t new_size, gfp_t flags) return (void *)p; } - ret = kmalloc_track_caller(new_size, flags); + ret = __kmalloc_node_track_caller(new_size, flags, NUMA_NO_NODE, _RET_IP_); if (ret && p) { /* Disable KASAN checks as the object's redzone is accessed. 
*/ kasan_disable_current(); @@ -1429,7 +1429,7 @@ __do_krealloc(const void *p, size_t new_size, gfp_t flags) * * Return: pointer to the allocated memory or %NULL in case of error */ -void *krealloc(const void *p, size_t new_size, gfp_t flags) +void *_krealloc(const void *p, size_t new_size, gfp_t flags) { void *ret; @@ -1444,7 +1444,7 @@ void *krealloc(const void *p, size_t new_size, gfp_t flags) return ret; } -EXPORT_SYMBOL(krealloc); +EXPORT_SYMBOL(_krealloc); /** * kfree_sensitive - Clear sensitive information in memory before freeing diff --git a/mm/slub.c b/mm/slub.c index 507b71372ee4..8f57fd086f69 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -3470,18 +3470,18 @@ void *__kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru, return ret; } -void *kmem_cache_alloc(struct kmem_cache *s, gfp_t gfpflags) +void *_kmem_cache_alloc(struct kmem_cache *s, gfp_t gfpflags) { return __kmem_cache_alloc_lru(s, NULL, gfpflags); } -EXPORT_SYMBOL(kmem_cache_alloc); +EXPORT_SYMBOL(_kmem_cache_alloc); -void *kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru, +void *_kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru, gfp_t gfpflags) { return __kmem_cache_alloc_lru(s, lru, gfpflags); } -EXPORT_SYMBOL(kmem_cache_alloc_lru); +EXPORT_SYMBOL(_kmem_cache_alloc_lru); void *__kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node, size_t orig_size, @@ -3491,7 +3491,7 @@ void *__kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, caller, orig_size); } -void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node) +void *_kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node) { void *ret = slab_alloc_node(s, NULL, gfpflags, node, _RET_IP_, s->object_size); @@ -3499,7 +3499,7 @@ void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags, int node) return ret; } -EXPORT_SYMBOL(kmem_cache_alloc_node); +EXPORT_SYMBOL(_kmem_cache_alloc_node); static noinline void free_to_partial_list( struct kmem_cache *s, struct slab *slab, @@ -3779,6 +3779,7 @@ static __fastpath_inline void slab_free(struct kmem_cache *s, struct slab *slab, unsigned long addr) { memcg_slab_free_hook(s, slab, p, cnt); + alloc_tagging_slab_free_hook(s, slab, p, cnt); /* * With KASAN enabled slab_free_freelist_hook modifies the freelist * to remove objects, whose reuse must be delayed. @@ -4009,7 +4010,7 @@ static int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, #endif /* CONFIG_SLUB_TINY */ /* Note that interrupts must be enabled when calling this function. 
*/ -int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, +int _kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, void **p) { int i; @@ -4034,7 +4035,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, slab_want_init_on_alloc(flags, s), s->object_size); return i; } -EXPORT_SYMBOL(kmem_cache_alloc_bulk); +EXPORT_SYMBOL(_kmem_cache_alloc_bulk); /* diff --git a/mm/util.c b/mm/util.c index dd12b9531ac4..e9077d1af676 100644 --- a/mm/util.c +++ b/mm/util.c @@ -579,7 +579,7 @@ EXPORT_SYMBOL(vm_mmap); * * Return: pointer to the allocated memory of %NULL in case of failure */ -void *kvmalloc_node(size_t size, gfp_t flags, int node) +void *_kvmalloc_node(size_t size, gfp_t flags, int node) { gfp_t kmalloc_flags = flags; void *ret; @@ -601,7 +601,7 @@ void *kvmalloc_node(size_t size, gfp_t flags, int node) kmalloc_flags &= ~__GFP_NOFAIL; } - ret = kmalloc_node(size, kmalloc_flags, node); + ret = _kmalloc_node(size, kmalloc_flags, node); /* * It doesn't really make sense to fallback to vmalloc for sub page @@ -630,7 +630,7 @@ void *kvmalloc_node(size_t size, gfp_t flags, int node) flags, PAGE_KERNEL, VM_ALLOW_HUGE_VMAP, node, __builtin_return_address(0)); } -EXPORT_SYMBOL(kvmalloc_node); +EXPORT_SYMBOL(_kvmalloc_node); /** * kvfree() - Free memory. @@ -669,7 +669,7 @@ void kvfree_sensitive(const void *addr, size_t len) } EXPORT_SYMBOL(kvfree_sensitive); -void *kvrealloc(const void *p, size_t oldsize, size_t newsize, gfp_t flags) +void *_kvrealloc(const void *p, size_t oldsize, size_t newsize, gfp_t flags) { void *newp; @@ -682,7 +682,7 @@ void *kvrealloc(const void *p, size_t oldsize, size_t newsize, gfp_t flags) kvfree(p); return newp; } -EXPORT_SYMBOL(kvrealloc); +EXPORT_SYMBOL(_kvrealloc); /** * __vmalloc_array - allocate memory for a virtually contiguous array. 
From patchwork Mon May 1 16:54:36 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suren Baghdasaryan X-Patchwork-Id: 89101 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp70300vqo; Mon, 1 May 2023 10:17:25 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ68+EJfV2MYzT4dFtQfSt64JxpXPcs7XEUGZjD4c0CLCBmEbKCETUErcM3avblrYFyQXiQ5 X-Received: by 2002:a05:6a00:1348:b0:63b:19e5:a9ec with SMTP id k8-20020a056a00134800b0063b19e5a9ecmr22714291pfu.33.1682961444917; Mon, 01 May 2023 10:17:24 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682961444; cv=none; d=google.com; s=arc-20160816; b=Lc9mvoUSoKx/eRPun1FzpMTvpEyOIEF22c2bjiWG4KUwQP5rEmKXQgE6UMl1+QQtTN I/ncmD/CLIfkFRC43oTFQu9c7Gx6TNgn5Oi/NRAIQbbrgmT9bYKpzEMbju6D3Y2xgvaN rMfCjIORNWs9cGIkPEjgMWY9am644qKi6vm0x5VbIQbfWZRZK85Xrwv7P5mZvlQ0YDm/ NhKNcP70Hcv8gQ7+t4dKTnXWkW+XJMafcBMT+pLHuXKQ5ogGGtGTggcnLdyhkx8JohgV xnj57aUFja40G7TN0My0hE7Jr07DyxVpHInCfxdGf0Xa5fsupMSM5chJtlz6RLWs6OJ2 D4wg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:cc:to:from:subject:message-id:references :mime-version:in-reply-to:date:dkim-signature; bh=YfoToyHVT1LJlGJVQ6MuRV3hbhaLXh5d5qfc4HkIPik=; b=bJkq0meJDE0fFX6TlknB3psYpX/nSi8u+CfcKWBdM0R+THqwzjYw/dVHe+WvOu424D vOsTrPLp/GB4st175/Tskl0r/Xh50rUi50pJT0uz4Biy8Tx/W9I1Q4QTN7soVP+nBVjI 96JWVltMFOS9vowrOzVYMWFYvN5bmgjjEL7qgYLkHgJCjwHIVn96FcvVaSKxJZoTtt6K C5KhIsu+G6LPWkh+jxFUKn7IPtaZYG/j8vKw3EyDazCJlaEg5LOq0iAEUEmKeGkJvMsj t+9dfX5Z3HRJTwDcYnqH2qpk9tEtQzVo7nQ83EA5FeR98WmSOQHS0AUxdwXvYVHcTWxY nkcw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@google.com header.s=20221208 header.b=Db2dTvDW; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id v5-20020aa799c5000000b0063f1cb928bdsi24557409pfi.313.2023.05.01.10.17.12; Mon, 01 May 2023 10:17:24 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@google.com header.s=20221208 header.b=Db2dTvDW; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232975AbjEARAO (ORCPT + 99 others); Mon, 1 May 2023 13:00:14 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60200 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232845AbjEAQ7i (ORCPT ); Mon, 1 May 2023 12:59:38 -0400 Received: from mail-yb1-xb4a.google.com (mail-yb1-xb4a.google.com [IPv6:2607:f8b0:4864:20::b4a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 34BAF35AB for ; Mon, 1 May 2023 09:56:23 -0700 (PDT) Received: by mail-yb1-xb4a.google.com with SMTP id 3f1490d57ef6-b9a7c45b8e1so4808883276.3 for ; Mon, 01 May 2023 09:56:23 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1682960167; x=1685552167; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=YfoToyHVT1LJlGJVQ6MuRV3hbhaLXh5d5qfc4HkIPik=; b=Db2dTvDWbMM6MtFyLXOOyG+p1K651ww8uL+kJDohPMMG6EWYrhfJFGluhHEW0wqXl5 UKPrByNQ1Z4QXnisfDyfKX7dbeY+3mxaMGuFVRBjDUr3JiqQISt6WeUr2I6JPah8BGyR FjdvJFEuIXgNlkkoVLLQxMPMSKPuyrjPesVcRR9te3ugJsI7tT+EXrko3AZ8ov2CcY8R JR28aghvPN6+RkIPURXyrGkWco+xEzOLoyotOADRZq9o/XJ68RcgnDEgHpMVXMMei422 +DGh+VKx3LXxPBgO4eeggvJc6/61DcK7Fvg52O8gRCKPK/nk5YYM+Q4LDqT5D1RFb4LF rBwA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682960167; x=1685552167; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=YfoToyHVT1LJlGJVQ6MuRV3hbhaLXh5d5qfc4HkIPik=; b=cky6WehkOx3SGIKRwuVP7Aj9mn5LN+kVA5WIcrCmOD6+YkvPTtRmtUYeNYDguGksrS DKHsqXBkExaTgWjn0uOClO3ayXD28QNK8reEJWVNlpWu5Qngla3BFATLhBjrKpaaknjb xOWAEPQExfStLn/Lzf0q92JHIgxE8N004vp7HarYYP1g7NLcLHE036UE3dSJKgtEojjC SaVosl7v8Fy61+bHmJXSKqlfsc5dn3Tx9SjFR8w9SzFW8epGEJN8mstM92SZ/UaeERzL /JAS/yMa9ZdnVMlRCmY7m8ZFMzUrDsXGfBi7bFGwjNisgYX60++ySrx5x7PAvK2QWAC3 7Ieg== X-Gm-Message-State: AC+VfDxvvXJ5UYhjVc7xvr1l/f1PLMUio0Yhgtb47rCKRD6QBmHTu2YO 5a7WkY3T4snf2A20oL8jiCxqNHrZTDw= X-Received: from surenb-desktop.mtv.corp.google.com ([2620:15c:211:201:6d24:3efd:facc:7ac4]) (user=surenb job=sendgmr) by 2002:a25:c0ca:0:b0:b9a:7cd6:ba7a with SMTP id c193-20020a25c0ca000000b00b9a7cd6ba7amr5449586ybf.12.1682960167516; Mon, 01 May 2023 09:56:07 -0700 (PDT) Date: Mon, 1 May 2023 09:54:36 -0700 In-Reply-To: <20230501165450.15352-1-surenb@google.com> Mime-Version: 1.0 References: <20230501165450.15352-1-surenb@google.com> X-Mailer: git-send-email 2.40.1.495.gc816e09b53d-goog Message-ID: <20230501165450.15352-27-surenb@google.com> Subject: [PATCH 26/40] mm/slub: Mark slab_free_freelist_hook() __always_inline From: Suren Baghdasaryan To: akpm@linux-foundation.org Cc: kent.overstreet@linux.dev, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, roman.gushchin@linux.dev, 
mgorman@suse.de, dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com, corbet@lwn.net, void@manifault.com, peterz@infradead.org, juri.lelli@redhat.com, ldufour@linux.ibm.com, catalin.marinas@arm.com, will@kernel.org, arnd@arndb.de, tglx@linutronix.de, mingo@redhat.com, dave.hansen@linux.intel.com, x86@kernel.org, peterx@redhat.com, david@redhat.com, axboe@kernel.dk, mcgrof@kernel.org, masahiroy@kernel.org, nathan@kernel.org, dennis@kernel.org, tj@kernel.org, muchun.song@linux.dev, rppt@kernel.org, paulmck@kernel.org, pasha.tatashin@soleen.com, yosryahmed@google.com, yuzhao@google.com, dhowells@redhat.com, hughd@google.com, andreyknvl@gmail.com, keescook@chromium.org, ndesaulniers@google.com, gregkh@linuxfoundation.org, ebiggers@google.com, ytcoode@gmail.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, bristot@redhat.com, vschneid@redhat.com, cl@linux.com, penberg@kernel.org, iamjoonsoo.kim@lge.com, 42.hyeyoo@gmail.com, glider@google.com, elver@google.com, dvyukov@google.com, shakeelb@google.com, songmuchun@bytedance.com, jbaron@akamai.com, rientjes@google.com, minchan@google.com, kaleshsingh@google.com, surenb@google.com, kernel-team@android.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-arch@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, kasan-dev@googlegroups.com, cgroups@vger.kernel.org X-Spam-Status: No, score=-9.6 required=5.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE,USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1764712980225439378?= X-GMAIL-MSGID: =?utf-8?q?1764712980225439378?= From: Kent Overstreet It seems we need to be more forceful with the compiler on this one. 
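For context on the wording: in kernel code plain inline is only a hint the optimizer may ignore, while __always_inline carries __attribute__((__always_inline__)) and forces the function to be expanded into every caller (compilers generally treat a failure to do so as an error). That is the "more forceful" request here. A minimal illustration, not taken from the patch:

/* Minimal illustration of the attribute difference; not from the patch. */
static inline int may_stay_a_call(int x)		/* hint only; a call may still be emitted */
{
	return x * 2;
}

static __always_inline int always_expanded(int x)	/* forcibly inlined into each caller */
{
	return x * 2;
}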
Signed-off-by: Kent Overstreet Signed-off-by: Suren Baghdasaryan --- mm/slub.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/mm/slub.c b/mm/slub.c index 8f57fd086f69..9dd57b3384a1 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -1781,7 +1781,7 @@ static __always_inline bool slab_free_hook(struct kmem_cache *s, return kasan_slab_free(s, x, init); } -static inline bool slab_free_freelist_hook(struct kmem_cache *s, +static __always_inline bool slab_free_freelist_hook(struct kmem_cache *s, void **head, void **tail, int *cnt) { From patchwork Mon May 1 16:54:37 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suren Baghdasaryan X-Patchwork-Id: 89096 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp67101vqo; Mon, 1 May 2023 10:12:38 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ6ZV8aImOFHWBuEqU9HmMTwZPsb6F/2Grzc8X9zkspXI3tH4qfrNgCfHvj+QOpSovKFisGI X-Received: by 2002:aa7:888a:0:b0:639:c88b:c3e0 with SMTP id z10-20020aa7888a000000b00639c88bc3e0mr21390526pfe.22.1682961158433; Mon, 01 May 2023 10:12:38 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682961158; cv=none; d=google.com; s=arc-20160816; b=vou6Kk55uX6MUcekaQKUIq0OpS42GiOVusp3WpbuYDVW8nsAcvhOoCq7NkMlcuUi2L jnuhSKskhB52Pwh12LoKaWXH8qHoKWhJseP5xw7rJDkzzzHiPR4ZZMwECJ3837nn6a5n SbUmaU6rix9plysb0k+HIWBqeU3y1shlvgJbrQnjh4GxWFq6OXIpuchrQ3uumCmCe+II 9jTYvgPsOAnbjDAF1zcFM5p7FublaDj0LLKnQ+bWYJlil9k/sSuMyk5vCtpwBsQmB6nP O0iEweXcqdyQg7vr1KRYE30/DBI9i9bf5OBoYpU4GtTDm6YQjhKCzL92IY+pbc2TP8x2 RdjQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:cc:to:from:subject:message-id:references :mime-version:in-reply-to:date:dkim-signature; bh=sqvz5oeuoJjzyL5d6pTK46j/h0psspf4+63tRdELHAU=; b=cT9B1tpgd0+RzEvYP9x5PS+pHl52C0qgcAe4IbtNfKgenoOnJuXBXfDIZ/7JF97ue6 Hg8SL5QMaas3vLaFpOKr/xkTxpNoUApmbgHKhCkjPseJNFgmgd0HhS4+EinQZ2/HDAeV VvtofBX5v2Z006QoxI/ptf0Pt9sMYUa1iT0vDUmH9O5f6CpS4SKb0gu4I7f/kZT0Jb9r YIa6PSLMctg+/2MDG3Xfr9H732BXALX/LErUxecjyMq9cmz6/s0wknX2BruZ/XZ0+TnL /OnoH/xT4NDQgvS3SKM9bF5yOx2mfDn9MFJKmQNkk9pJ9T50BJE741SzX9RZNGu1O8sD w2ow== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@google.com header.s=20221208 header.b=WvFBiHp4; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id bk13-20020a056a02028d00b0051b9d096f3csi26921821pgb.881.2023.05.01.10.12.22; Mon, 01 May 2023 10:12:38 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@google.com header.s=20221208 header.b=WvFBiHp4; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232985AbjEARAt (ORCPT + 99 others); Mon, 1 May 2023 13:00:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60240 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232996AbjEAQ7l (ORCPT ); Mon, 1 May 2023 12:59:41 -0400 Received: from mail-yb1-xb49.google.com (mail-yb1-xb49.google.com [IPv6:2607:f8b0:4864:20::b49]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9648E26A6 for ; Mon, 1 May 2023 09:56:26 -0700 (PDT) Received: by mail-yb1-xb49.google.com with SMTP id 3f1490d57ef6-b9a7d92d0f7so4769868276.1 for ; Mon, 01 May 2023 09:56:26 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1682960170; x=1685552170; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=sqvz5oeuoJjzyL5d6pTK46j/h0psspf4+63tRdELHAU=; b=WvFBiHp4GUaavo4X5tfWgjPPIzI4jGTUdYosSfeYytpBQUHvpJznwgCxcgtRl4137n /lBXJJHc248hdfGU59NQ6YgyXWglquQNEf83f6NZrWhqBExIs3wpePGzi3G6GXt6Q4EK YNi+IUVCZ2K+oon9eJeUuOHyA0Dti1Ev+im7AvpjkpEZ+nJa/TIMCcKnkDRhn07rC5v5 U+jUUTR0hv29E56yn3HNUOsZcTbbcZMOOzG+OmYRSipaveAVFeX44ceC3CgOIteuddHZ pJc9MAEnRvahuFY90dt+kNg7+emyX/nC+hRarTb4+wBTNE2Gp4VjzhmNP/RRS0EOHEJ7 5wQg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682960170; x=1685552170; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=sqvz5oeuoJjzyL5d6pTK46j/h0psspf4+63tRdELHAU=; b=VWwqDbgQUpk9A0xZH6MdXCuYknzM8lUjY2jNEb8KqoMpVz47EkXM6aeT2QwfZLagf+ FrcaPkDk2HjfsYHjZMZ9iYkyzKnfFRywyM2MfSmegTxnBk5t5huL3qL03JIC/DJhtETH PjxwWGOaE4OY8e3fKxwL2RrpAtvnxPqOFIu5DiFuxUwcsn8kMKnQlguU3+BbUnKaiPaT B0ajTzPtrmA5oPWA+b2Zu6pp2ze8W+iVvRGsEO4U2nAeUd6XcWHOHLwwrNEHmqz3tesf yK8cc+a3Xuq5RdLlAKUyPNQtDQUWRFcHkRMTz8kU1CPXFZPoLuPvrte/YVf0SDEgJoVN sSMA== X-Gm-Message-State: AC+VfDwQz1j5BGRoj+EIbLuvjYeltDdvv6AVARL0tUh9sfKitqmm8XDn QZ+En3iGireiWpSWHRlaTpnVvWlfolA= X-Received: from surenb-desktop.mtv.corp.google.com ([2620:15c:211:201:6d24:3efd:facc:7ac4]) (user=surenb job=sendgmr) by 2002:a25:cd08:0:b0:b9a:7cfe:9bf1 with SMTP id d8-20020a25cd08000000b00b9a7cfe9bf1mr5044873ybf.8.1682960169618; Mon, 01 May 2023 09:56:09 -0700 (PDT) Date: Mon, 1 May 2023 09:54:37 -0700 In-Reply-To: <20230501165450.15352-1-surenb@google.com> Mime-Version: 1.0 References: <20230501165450.15352-1-surenb@google.com> X-Mailer: git-send-email 2.40.1.495.gc816e09b53d-goog Message-ID: <20230501165450.15352-28-surenb@google.com> Subject: [PATCH 27/40] mempool: Hook up to memory allocation profiling From: Suren Baghdasaryan To: akpm@linux-foundation.org Cc: kent.overstreet@linux.dev, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, roman.gushchin@linux.dev, mgorman@suse.de, 
dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com, corbet@lwn.net, void@manifault.com, peterz@infradead.org, juri.lelli@redhat.com, ldufour@linux.ibm.com, catalin.marinas@arm.com, will@kernel.org, arnd@arndb.de, tglx@linutronix.de, mingo@redhat.com, dave.hansen@linux.intel.com, x86@kernel.org, peterx@redhat.com, david@redhat.com, axboe@kernel.dk, mcgrof@kernel.org, masahiroy@kernel.org, nathan@kernel.org, dennis@kernel.org, tj@kernel.org, muchun.song@linux.dev, rppt@kernel.org, paulmck@kernel.org, pasha.tatashin@soleen.com, yosryahmed@google.com, yuzhao@google.com, dhowells@redhat.com, hughd@google.com, andreyknvl@gmail.com, keescook@chromium.org, ndesaulniers@google.com, gregkh@linuxfoundation.org, ebiggers@google.com, ytcoode@gmail.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, bristot@redhat.com, vschneid@redhat.com, cl@linux.com, penberg@kernel.org, iamjoonsoo.kim@lge.com, 42.hyeyoo@gmail.com, glider@google.com, elver@google.com, dvyukov@google.com, shakeelb@google.com, songmuchun@bytedance.com, jbaron@akamai.com, rientjes@google.com, minchan@google.com, kaleshsingh@google.com, surenb@google.com, kernel-team@android.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-arch@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, kasan-dev@googlegroups.com, cgroups@vger.kernel.org X-Spam-Status: No, score=-9.6 required=5.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE,USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1764712679733643714?= X-GMAIL-MSGID: =?utf-8?q?1764712679733643714?= From: Kent Overstreet This adds hooks to mempools for correctly annotating mempool-backed allocations at the correct source line, so they show up correctly in /sys/kernel/debug/allocations. Various inline functions are converted to wrappers so that we can invoke alloc_hooks() in fewer places. Signed-off-by: Kent Overstreet Signed-off-by: Suren Baghdasaryan --- include/linux/mempool.h | 73 ++++++++++++++++++++--------------------- mm/mempool.c | 28 ++++++---------- 2 files changed, 45 insertions(+), 56 deletions(-) diff --git a/include/linux/mempool.h b/include/linux/mempool.h index 4aae6c06c5f2..aa6e886b01d7 100644 --- a/include/linux/mempool.h +++ b/include/linux/mempool.h @@ -5,6 +5,8 @@ #ifndef _LINUX_MEMPOOL_H #define _LINUX_MEMPOOL_H +#include +#include #include #include @@ -39,18 +41,32 @@ void mempool_exit(mempool_t *pool); int mempool_init_node(mempool_t *pool, int min_nr, mempool_alloc_t *alloc_fn, mempool_free_t *free_fn, void *pool_data, gfp_t gfp_mask, int node_id); -int mempool_init(mempool_t *pool, int min_nr, mempool_alloc_t *alloc_fn, + +int _mempool_init(mempool_t *pool, int min_nr, mempool_alloc_t *alloc_fn, mempool_free_t *free_fn, void *pool_data); +#define mempool_init(...) 
\ + alloc_hooks(_mempool_init(__VA_ARGS__), int, -ENOMEM) extern mempool_t *mempool_create(int min_nr, mempool_alloc_t *alloc_fn, mempool_free_t *free_fn, void *pool_data); -extern mempool_t *mempool_create_node(int min_nr, mempool_alloc_t *alloc_fn, + +extern mempool_t *_mempool_create_node(int min_nr, mempool_alloc_t *alloc_fn, mempool_free_t *free_fn, void *pool_data, gfp_t gfp_mask, int nid); +#define mempool_create_node(...) \ + alloc_hooks(_mempool_create_node(__VA_ARGS__), mempool_t *, NULL) + +#define mempool_create(_min_nr, _alloc_fn, _free_fn, _pool_data) \ + mempool_create_node(_min_nr, _alloc_fn, _free_fn, _pool_data, \ + GFP_KERNEL, NUMA_NO_NODE) extern int mempool_resize(mempool_t *pool, int new_min_nr); extern void mempool_destroy(mempool_t *pool); -extern void *mempool_alloc(mempool_t *pool, gfp_t gfp_mask) __malloc; + +extern void *_mempool_alloc(mempool_t *pool, gfp_t gfp_mask) __malloc; +#define mempool_alloc(_pool, _gfp) \ + alloc_hooks(_mempool_alloc((_pool), (_gfp)), void *, NULL) + extern void mempool_free(void *element, mempool_t *pool); /* @@ -61,19 +77,10 @@ extern void mempool_free(void *element, mempool_t *pool); void *mempool_alloc_slab(gfp_t gfp_mask, void *pool_data); void mempool_free_slab(void *element, void *pool_data); -static inline int -mempool_init_slab_pool(mempool_t *pool, int min_nr, struct kmem_cache *kc) -{ - return mempool_init(pool, min_nr, mempool_alloc_slab, - mempool_free_slab, (void *) kc); -} - -static inline mempool_t * -mempool_create_slab_pool(int min_nr, struct kmem_cache *kc) -{ - return mempool_create(min_nr, mempool_alloc_slab, mempool_free_slab, - (void *) kc); -} +#define mempool_init_slab_pool(_pool, _min_nr, _kc) \ + mempool_init(_pool, (_min_nr), mempool_alloc_slab, mempool_free_slab, (void *)(_kc)) +#define mempool_create_slab_pool(_min_nr, _kc) \ + mempool_create((_min_nr), mempool_alloc_slab, mempool_free_slab, (void *)(_kc)) /* * a mempool_alloc_t and a mempool_free_t to kmalloc and kfree the @@ -82,17 +89,12 @@ mempool_create_slab_pool(int min_nr, struct kmem_cache *kc) void *mempool_kmalloc(gfp_t gfp_mask, void *pool_data); void mempool_kfree(void *element, void *pool_data); -static inline int mempool_init_kmalloc_pool(mempool_t *pool, int min_nr, size_t size) -{ - return mempool_init(pool, min_nr, mempool_kmalloc, - mempool_kfree, (void *) size); -} - -static inline mempool_t *mempool_create_kmalloc_pool(int min_nr, size_t size) -{ - return mempool_create(min_nr, mempool_kmalloc, mempool_kfree, - (void *) size); -} +#define mempool_init_kmalloc_pool(_pool, _min_nr, _size) \ + mempool_init(_pool, (_min_nr), mempool_kmalloc, mempool_kfree, \ + (void *)(unsigned long)(_size)) +#define mempool_create_kmalloc_pool(_min_nr, _size) \ + mempool_create((_min_nr), mempool_kmalloc, mempool_kfree, \ + (void *)(unsigned long)(_size)) /* * A mempool_alloc_t and mempool_free_t for a simple page allocator that @@ -101,16 +103,11 @@ static inline mempool_t *mempool_create_kmalloc_pool(int min_nr, size_t size) void *mempool_alloc_pages(gfp_t gfp_mask, void *pool_data); void mempool_free_pages(void *element, void *pool_data); -static inline int mempool_init_page_pool(mempool_t *pool, int min_nr, int order) -{ - return mempool_init(pool, min_nr, mempool_alloc_pages, - mempool_free_pages, (void *)(long)order); -} - -static inline mempool_t *mempool_create_page_pool(int min_nr, int order) -{ - return mempool_create(min_nr, mempool_alloc_pages, mempool_free_pages, - (void *)(long)order); -} +#define mempool_init_page_pool(_pool, _min_nr, 
_order) \ + mempool_init(_pool, (_min_nr), mempool_alloc_pages, \ + mempool_free_pages, (void *)(long)(_order)) +#define mempool_create_page_pool(_min_nr, _order) \ + mempool_create((_min_nr), mempool_alloc_pages, \ + mempool_free_pages, (void *)(long)(_order)) #endif /* _LINUX_MEMPOOL_H */ diff --git a/mm/mempool.c b/mm/mempool.c index 734bcf5afbb7..4fc90735853c 100644 --- a/mm/mempool.c +++ b/mm/mempool.c @@ -230,17 +230,17 @@ EXPORT_SYMBOL(mempool_init_node); * * Return: %0 on success, negative error code otherwise. */ -int mempool_init(mempool_t *pool, int min_nr, mempool_alloc_t *alloc_fn, +int _mempool_init(mempool_t *pool, int min_nr, mempool_alloc_t *alloc_fn, mempool_free_t *free_fn, void *pool_data) { return mempool_init_node(pool, min_nr, alloc_fn, free_fn, pool_data, GFP_KERNEL, NUMA_NO_NODE); } -EXPORT_SYMBOL(mempool_init); +EXPORT_SYMBOL(_mempool_init); /** - * mempool_create - create a memory pool + * mempool_create_node - create a memory pool * @min_nr: the minimum number of elements guaranteed to be * allocated for this pool. * @alloc_fn: user-defined element-allocation function. @@ -255,15 +255,7 @@ EXPORT_SYMBOL(mempool_init); * * Return: pointer to the created memory pool object or %NULL on error. */ -mempool_t *mempool_create(int min_nr, mempool_alloc_t *alloc_fn, - mempool_free_t *free_fn, void *pool_data) -{ - return mempool_create_node(min_nr, alloc_fn, free_fn, pool_data, - GFP_KERNEL, NUMA_NO_NODE); -} -EXPORT_SYMBOL(mempool_create); - -mempool_t *mempool_create_node(int min_nr, mempool_alloc_t *alloc_fn, +mempool_t *_mempool_create_node(int min_nr, mempool_alloc_t *alloc_fn, mempool_free_t *free_fn, void *pool_data, gfp_t gfp_mask, int node_id) { @@ -281,7 +273,7 @@ mempool_t *mempool_create_node(int min_nr, mempool_alloc_t *alloc_fn, return pool; } -EXPORT_SYMBOL(mempool_create_node); +EXPORT_SYMBOL(_mempool_create_node); /** * mempool_resize - resize an existing memory pool @@ -377,7 +369,7 @@ EXPORT_SYMBOL(mempool_resize); * * Return: pointer to the allocated element or %NULL on error. */ -void *mempool_alloc(mempool_t *pool, gfp_t gfp_mask) +void *_mempool_alloc(mempool_t *pool, gfp_t gfp_mask) { void *element; unsigned long flags; @@ -444,7 +436,7 @@ void *mempool_alloc(mempool_t *pool, gfp_t gfp_mask) finish_wait(&pool->wait, &wait); goto repeat_alloc; } -EXPORT_SYMBOL(mempool_alloc); +EXPORT_SYMBOL(_mempool_alloc); /** * mempool_free - return an element to the pool. 
@@ -515,7 +507,7 @@ void *mempool_alloc_slab(gfp_t gfp_mask, void *pool_data) { struct kmem_cache *mem = pool_data; VM_BUG_ON(mem->ctor); - return kmem_cache_alloc(mem, gfp_mask); + return _kmem_cache_alloc(mem, gfp_mask); } EXPORT_SYMBOL(mempool_alloc_slab); @@ -533,7 +525,7 @@ EXPORT_SYMBOL(mempool_free_slab); void *mempool_kmalloc(gfp_t gfp_mask, void *pool_data) { size_t size = (size_t)pool_data; - return kmalloc(size, gfp_mask); + return _kmalloc(size, gfp_mask); } EXPORT_SYMBOL(mempool_kmalloc); @@ -550,7 +542,7 @@ EXPORT_SYMBOL(mempool_kfree); void *mempool_alloc_pages(gfp_t gfp_mask, void *pool_data) { int order = (int)(long)pool_data; - return alloc_pages(gfp_mask, order); + return _alloc_pages(gfp_mask, order); } EXPORT_SYMBOL(mempool_alloc_pages); From patchwork Mon May 1 16:54:38 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suren Baghdasaryan X-Patchwork-Id: 89091 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp59396vqo; Mon, 1 May 2023 10:02:07 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ6fZNhS6g3SC+WV76m2pz1brTdtBk1bJej18Hr5vxZht16E0Ioudu0GtXeUV9fnurOiRiKT X-Received: by 2002:a17:90b:3a8d:b0:23f:2661:f94c with SMTP id om13-20020a17090b3a8d00b0023f2661f94cmr14913529pjb.47.1682960527107; Mon, 01 May 2023 10:02:07 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682960527; cv=none; d=google.com; s=arc-20160816; b=K4iTPLlsSdTdmI1xHCu0fOiXWkfljRxWErGX5/MoWOCAfJyBIOICZUXU2YOvnNpDb0 ZPCpiLCaIjEMxSklMpjJrT9h72jrkf3f79Cd/9grvLA7mTTMaiRyGcENLCKhkO7rBBtd z4ZNH2qDIqh0z3W6V10MD5N/OD5bjzirlgwlX9xH4/tpM6EPsFbT8lj8L+AkDFwY4uUY Q4XHr6BigBrWjPaVSHIJ/2xxtxABDQurVw4yi5Fx6pci06aZXiMhJ0ed91q0lXaYvqGF ACdj3WnhmuHBSqJDIQSAUVMReCHrdiHzpbqxGY2U9gUb+NccHT95FX1FopjlQIXmyJ7n vdiA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:cc:to:from:subject:message-id:references :mime-version:in-reply-to:date:dkim-signature; bh=jz3+CZtZwJ9/tSYfHBM/2MHA9DEGGjB4zlxzO096/9Q=; b=kk5dfNIGcbOKC4SO17DJ7zaO0sWcuJUSSWLstPAJiOopQX66JFzdUMllra/qTlIj7k 7wwacnL2/NA1g9KXVF5Zh12ZgNK3rqepq3OVqhslg+sjRq8tWqjsebGQZGCRP4/pmrFk 3KH2aR+5e7Sd1kj5856uIxCE/8+o2rG0wMWGf+9NdK0eCnDy7FfHQr5gUoamrwxUb+Jl NcewT3YGButrPvlVwOf9c6TcIEh+v/4Ao9Wrx1n6ZGxnCaNlckWWyE7CBlC4HPyRZLKY 5CfiiNQxTCC7hJzhq1jzJnCN376T0Zy2D44wEvKKlc0xsFDDBLQMQk5eCP/VTssTuOl5 H3sA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@google.com header.s=20221208 header.b=Glr4JSHs; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: from out1.vger.email (out1.vger.email. 
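
For readers who have not seen the alloc_hooks() pattern before, the following is a minimal userspace sketch of the idea behind the mempool changes above: the real allocator keeps doing the work under an underscore-prefixed name, while a thin macro wrapper expands at the call site, so the caller's file and line can be recorded (and later surfaced in /sys/kernel/debug/allocations) without touching any callers. All demo_* names below are hypothetical stand-ins, not the kernel API, and the sketch uses GNU statement-expression syntax as the kernel itself does.

#include <stdio.h>
#include <stdlib.h>

/* Stand-in for the renamed worker, e.g. _mempool_alloc(). */
static void *demo_mempool_alloc(size_t size)
{
	return malloc(size);
}

/*
 * Stand-in for alloc_hooks(): note where the allocation happens, then
 * evaluate the real allocation expression and yield its result.
 */
#define demo_alloc_hooks(expr)						\
({									\
	printf("allocation at %s:%d\n", __FILE__, __LINE__);		\
	(expr);								\
})

/* The public name becomes a macro over the worker, so callers are unchanged. */
#define demo_mempool_alloc_hooked(size) \
	demo_alloc_hooks(demo_mempool_alloc(size))

int main(void)
{
	void *p = demo_mempool_alloc_hooked(64);	/* attributed to this line */

	free(p);
	return 0;
}

Because __FILE__ and __LINE__ expand where the macro is used, the recorded location is the caller's, not a line inside the allocator, which is exactly why the inline helpers above are converted into macros.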
[2620:137:e000::1:20]) by mx.google.com with ESMTP id n7-20020a17090a2bc700b0023d4d532a7csi9376336pje.101.2023.05.01.10.01.47; Mon, 01 May 2023 10:02:07 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@google.com header.s=20221208 header.b=Glr4JSHs; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233075AbjEARBP (ORCPT + 99 others); Mon, 1 May 2023 13:01:15 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60328 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233005AbjEAQ7n (ORCPT ); Mon, 1 May 2023 12:59:43 -0400 Received: from mail-yb1-xb4a.google.com (mail-yb1-xb4a.google.com [IPv6:2607:f8b0:4864:20::b4a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5998135B1 for ; Mon, 1 May 2023 09:56:31 -0700 (PDT) Received: by mail-yb1-xb4a.google.com with SMTP id 3f1490d57ef6-b9a7d92d0f7so4769979276.1 for ; Mon, 01 May 2023 09:56:31 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1682960172; x=1685552172; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=jz3+CZtZwJ9/tSYfHBM/2MHA9DEGGjB4zlxzO096/9Q=; b=Glr4JSHsTmaAPI0AWP0PWNaIttgzeyDzPQx4tjZhzhm0zZ4p+1GCcCC3c3r4aXKXSp WIW7l4ONjqrOBXvjUIVll03e3w9s+YRWlCSxmsKNK9xzzAcM54D0O3+CWQoKF/ytzIvC kOX+0OA2rpltLDjWT1I86SdUtJ7gM1/iT7nXNjbfE5eIm0GIboQlBljhXdjDloTQNmLw 70IARjApWsmBCF7eSF3LKPA2bTZVurXLcycCJn2u8FXoQc4EQGTI1K+d32OJcow714/u 3R8ajXD58HPiHypeEQaSKmOGWRsxZiDT+trPafMrQwqkk9Lr9LEGmo4kscoUpmICusqi Lbwg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682960172; x=1685552172; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=jz3+CZtZwJ9/tSYfHBM/2MHA9DEGGjB4zlxzO096/9Q=; b=MPBcb+WmLSLJBTvJuFsmjsP4Jz1h1ORBvXxbiCnyRs0pBSEEWiQu7F47N7zhSOUY/j ldyOezzRnOoxCu++ZGwo6ipr4z+VUjA46LFSKaeLwCbsuSBa6g0L+nbrT5wn44/3AUNB AcO5M7jOxE+lvuStUAFFET/60amf6oI9rLJEOXfFAVcbEhoF+yhsU8XfHQQn8RPM2HsF 47Nx8hYzy6+vogNtTspntsugr7oPw8oQt/moEK4xUIlmi80Gp4jfxs6eCIFQcNP888d3 5+Hsl606qLpDtCHCUxViVdD8UCBxQi1LUybQGQ77VoZ3pyET4VXy+yKmk8cPWKLJ28Uv 5KSg== X-Gm-Message-State: AC+VfDxlyD8XQYsIHdwbKciVpg4Otw6YuNWmFPm0iCzCM40YATaY1VFB p8r+rf+NC7iZUTH/t1xVb36vRVzS/so= X-Received: from surenb-desktop.mtv.corp.google.com ([2620:15c:211:201:6d24:3efd:facc:7ac4]) (user=surenb job=sendgmr) by 2002:a25:1388:0:b0:b95:ecc5:5796 with SMTP id 130-20020a251388000000b00b95ecc55796mr5071137ybt.12.1682960171977; Mon, 01 May 2023 09:56:11 -0700 (PDT) Date: Mon, 1 May 2023 09:54:38 -0700 In-Reply-To: <20230501165450.15352-1-surenb@google.com> Mime-Version: 1.0 References: <20230501165450.15352-1-surenb@google.com> X-Mailer: git-send-email 2.40.1.495.gc816e09b53d-goog Message-ID: <20230501165450.15352-29-surenb@google.com> Subject: [PATCH 28/40] timekeeping: Fix a circular include dependency From: Suren Baghdasaryan To: akpm@linux-foundation.org Cc: kent.overstreet@linux.dev, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, roman.gushchin@linux.dev, mgorman@suse.de, 
dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com, corbet@lwn.net, void@manifault.com, peterz@infradead.org, juri.lelli@redhat.com, ldufour@linux.ibm.com, catalin.marinas@arm.com, will@kernel.org, arnd@arndb.de, tglx@linutronix.de, mingo@redhat.com, dave.hansen@linux.intel.com, x86@kernel.org, peterx@redhat.com, david@redhat.com, axboe@kernel.dk, mcgrof@kernel.org, masahiroy@kernel.org, nathan@kernel.org, dennis@kernel.org, tj@kernel.org, muchun.song@linux.dev, rppt@kernel.org, paulmck@kernel.org, pasha.tatashin@soleen.com, yosryahmed@google.com, yuzhao@google.com, dhowells@redhat.com, hughd@google.com, andreyknvl@gmail.com, keescook@chromium.org, ndesaulniers@google.com, gregkh@linuxfoundation.org, ebiggers@google.com, ytcoode@gmail.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, bristot@redhat.com, vschneid@redhat.com, cl@linux.com, penberg@kernel.org, iamjoonsoo.kim@lge.com, 42.hyeyoo@gmail.com, glider@google.com, elver@google.com, dvyukov@google.com, shakeelb@google.com, songmuchun@bytedance.com, jbaron@akamai.com, rientjes@google.com, minchan@google.com, kaleshsingh@google.com, surenb@google.com, kernel-team@android.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-arch@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, kasan-dev@googlegroups.com, cgroups@vger.kernel.org X-Spam-Status: No, score=-9.6 required=5.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1764712017487377303?= X-GMAIL-MSGID: =?utf-8?q?1764712017487377303?= From: Kent Overstreet This avoids a circular header dependency in an upcoming patch by only making hrtimer.h depend on percpu-defs.h Signed-off-by: Kent Overstreet Signed-off-by: Suren Baghdasaryan Cc: Thomas Gleixner Reviewed-by: Thomas Gleixner --- include/linux/hrtimer.h | 2 +- include/linux/time_namespace.h | 2 ++ 2 files changed, 3 insertions(+), 1 deletion(-) diff --git a/include/linux/hrtimer.h b/include/linux/hrtimer.h index 0ee140176f10..e67349e84364 100644 --- a/include/linux/hrtimer.h +++ b/include/linux/hrtimer.h @@ -16,7 +16,7 @@ #include #include #include -#include +#include #include #include #include diff --git a/include/linux/time_namespace.h b/include/linux/time_namespace.h index bb9d3f5542f8..d8e0cacfcae5 100644 --- a/include/linux/time_namespace.h +++ b/include/linux/time_namespace.h @@ -11,6 +11,8 @@ struct user_namespace; extern struct user_namespace init_user_ns; +struct vm_area_struct; + struct timens_offsets { struct timespec64 monotonic; struct timespec64 boottime; From patchwork Mon May 1 16:54:39 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suren Baghdasaryan X-Patchwork-Id: 89121 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp72639vqo; Mon, 1 May 2023 10:21:16 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ6sWEEYwk0RNRcVNp8uwBQGTpcg68i0jR7R4j5ZJbwtoPwBlJq01yE6dkTRdQSbZIuKzUAl X-Received: by 2002:a05:6a20:144e:b0:f3:532a:7150 with SMTP id 
a14-20020a056a20144e00b000f3532a7150mr18992747pzi.0.1682961676329; Mon, 01 May 2023 10:21:16 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682961676; cv=none; d=google.com; s=arc-20160816; b=oD8AH429ovJQ5sTfoHXeDQrGQxXst864ISZ1otODH9Nzk2YIbJv/gzeTVQwempmVoH tjWy+7pfwa2xrxlUONLE1RlK5i/vL675n/StYRQtVANZBFhySj9y3N4Hcw2bYJW5ARpP HsC8FxheNWda3fgWxlNhXwHM79A0UXK4jqcXSXZLg2khoho4KWZbV0W07s4CjVtax5iQ MR++p/6Ac7Vm2gNeHHo1CMLorKxvsx9WrY6VLRR9V+ClIxnddAXqVyGBFm7cAJzmpg+4 kIGOO9WHofYyBbY1FiosMb9/hxkBtO8s//IOEyWE+2YQs1r1eOmizc+MdYWdB0QeHxrf 87uQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:cc:to:from:subject:message-id:references :mime-version:in-reply-to:date:dkim-signature; bh=k4gyiXZ+rLWKPzhf5TCPslUr+ioI4/2asgnCyiPdOEo=; b=x47DBORej7i5I8KNWlJv10o7nnTzIch+NySs/qJ4ePZyzT9hrzs4QHvIBmk5nyaaSx 6nPSQ3QiH8ikFSzOBum0gREac5t7vHkiflXUG2Zu3gFLyXmTYmlv0eLtfxlz2zI4RLyD Ot0viP3OKJBSkKiXJsZiJP4JdyVR9takKqvgLSri4M9OtJkQSdPd5sUXngNK1fFM75HQ COFpaCp/X6o80KqwKREXQrzyjhBkmXht7DX0EIETbj1JhMIdPWRqh8gI7a96F1lN1e+O FeD81MMwoTB03GCmaEu++Ry16dl3HtvBQQyGBSqwocAd2dXSkYZOkd6keiqf1WwhkDC3 8L7g== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@google.com header.s=20221208 header.b=DmSOIqXp; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id a15-20020a631a0f000000b0051892919b7csi27680186pga.420.2023.05.01.10.21.03; Mon, 01 May 2023 10:21:16 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@google.com header.s=20221208 header.b=DmSOIqXp; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233137AbjEARB3 (ORCPT + 99 others); Mon, 1 May 2023 13:01:29 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33252 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233125AbjEAQ7w (ORCPT ); Mon, 1 May 2023 12:59:52 -0400 Received: from mail-yb1-xb4a.google.com (mail-yb1-xb4a.google.com [IPv6:2607:f8b0:4864:20::b4a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B03BA1BC7 for ; Mon, 1 May 2023 09:56:34 -0700 (PDT) Received: by mail-yb1-xb4a.google.com with SMTP id 3f1490d57ef6-b9a7553f95dso5489809276.2 for ; Mon, 01 May 2023 09:56:34 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1682960174; x=1685552174; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=k4gyiXZ+rLWKPzhf5TCPslUr+ioI4/2asgnCyiPdOEo=; b=DmSOIqXpDpgOhsbYBtSJd0pE8NBIuLLzTwqet8jr5NLvFjfEu90BXeWtYv7qlPqaG7 S7+D6TPAoWHLs26LhS/oYb03UoMwG4HhBUCufMxi2OXT8Cg0Wg5ZW9rp2ejUU7/dM0E+ g9Ptf/gGhu2ayAN03NAMSR+d1VG5nCdJ9H6WMvt6TJ0j29Cpmun2V6f3j+OoydA/EMqH hhtOdKXFKMT9O6F/RJeaiymHmIAjupnQaY2DJyokZEtCEBhHqOPrpU0cVWh3zhhtOv4w vPFJ1809YCMUcOTdA9cBgXt/0HwdTb2v9IdEptjtwIgtSwcOESs6yl5SKx9kBZVUQ0L8 /sDw== X-Google-DKIM-Signature: v=1; 
a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682960174; x=1685552174; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=k4gyiXZ+rLWKPzhf5TCPslUr+ioI4/2asgnCyiPdOEo=; b=ZQrqF01Je4YyAvU0Q6LitHotKdi7Erdh+taFlAwQXbGXdThd8f+uuHh5AY/iCYcCRI mtUcqKaAoV28KWGkYIQFGR1vFgQ9qwkcBB01w4CTMmfnkV3yWGrU1GHc1QIqdOWHkwWu f59+PoDQ/UE67Z4J8P7MuB/6N9Hz+ZJgcdr9Hrd5Nw96Wg9QWrtNfX1IJeeBkRRrzeYJ q30PweALsetB0u7vmpHGuJyKZPruUkTNpDrG6jz4uBUlcJ2uGHffh/XaGMS44tBTyThm 7HX+UD6/7EQKqEkCC+Yw+A/oeoBeXpCWmaM9QwWSgrRY6cGAVyiSRe6g10lq+ukebMkD 7I2Q== X-Gm-Message-State: AC+VfDwCRGwt7/9XAg9uuURt+IJgm7WmQvJvTIAhe8KxGW4ZQIEpk3Tk YNZOCIAM3hVXiwNZXE/UVYZPpomVB5w= X-Received: from surenb-desktop.mtv.corp.google.com ([2620:15c:211:201:6d24:3efd:facc:7ac4]) (user=surenb job=sendgmr) by 2002:a05:6902:1081:b0:b9d:d5dc:5971 with SMTP id v1-20020a056902108100b00b9dd5dc5971mr3225339ybu.2.1682960174070; Mon, 01 May 2023 09:56:14 -0700 (PDT) Date: Mon, 1 May 2023 09:54:39 -0700 In-Reply-To: <20230501165450.15352-1-surenb@google.com> Mime-Version: 1.0 References: <20230501165450.15352-1-surenb@google.com> X-Mailer: git-send-email 2.40.1.495.gc816e09b53d-goog Message-ID: <20230501165450.15352-30-surenb@google.com> Subject: [PATCH 29/40] mm: percpu: Introduce pcpuobj_ext From: Suren Baghdasaryan To: akpm@linux-foundation.org Cc: kent.overstreet@linux.dev, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, roman.gushchin@linux.dev, mgorman@suse.de, dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com, corbet@lwn.net, void@manifault.com, peterz@infradead.org, juri.lelli@redhat.com, ldufour@linux.ibm.com, catalin.marinas@arm.com, will@kernel.org, arnd@arndb.de, tglx@linutronix.de, mingo@redhat.com, dave.hansen@linux.intel.com, x86@kernel.org, peterx@redhat.com, david@redhat.com, axboe@kernel.dk, mcgrof@kernel.org, masahiroy@kernel.org, nathan@kernel.org, dennis@kernel.org, tj@kernel.org, muchun.song@linux.dev, rppt@kernel.org, paulmck@kernel.org, pasha.tatashin@soleen.com, yosryahmed@google.com, yuzhao@google.com, dhowells@redhat.com, hughd@google.com, andreyknvl@gmail.com, keescook@chromium.org, ndesaulniers@google.com, gregkh@linuxfoundation.org, ebiggers@google.com, ytcoode@gmail.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, bristot@redhat.com, vschneid@redhat.com, cl@linux.com, penberg@kernel.org, iamjoonsoo.kim@lge.com, 42.hyeyoo@gmail.com, glider@google.com, elver@google.com, dvyukov@google.com, shakeelb@google.com, songmuchun@bytedance.com, jbaron@akamai.com, rientjes@google.com, minchan@google.com, kaleshsingh@google.com, surenb@google.com, kernel-team@android.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-arch@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, kasan-dev@googlegroups.com, cgroups@vger.kernel.org X-Spam-Status: No, score=-9.6 required=5.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE,USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1764713222890184002?= X-GMAIL-MSGID: =?utf-8?q?1764713222890184002?= From: Kent 
Overstreet Upcoming alloc tagging patches require a place to stash per-allocation metadata. We already do this when memcg is enabled, so this patch generalizes the obj_cgroup * vector in struct pcpu_chunk by creating a pcpu_obj_ext type, which we will be adding to in an upcoming patch - similarly to the previous slabobj_ext patch. Signed-off-by: Kent Overstreet Signed-off-by: Suren Baghdasaryan Cc: Andrew Morton Cc: Dennis Zhou Cc: Tejun Heo Cc: Christoph Lameter Cc: linux-mm@kvack.org --- mm/percpu-internal.h | 19 +++++++++++++++++-- mm/percpu.c | 30 +++++++++++++++--------------- 2 files changed, 32 insertions(+), 17 deletions(-) diff --git a/mm/percpu-internal.h b/mm/percpu-internal.h index f9847c131998..2433e7b24172 100644 --- a/mm/percpu-internal.h +++ b/mm/percpu-internal.h @@ -32,6 +32,16 @@ struct pcpu_block_md { int nr_bits; /* total bits responsible for */ }; +struct pcpuobj_ext { +#ifdef CONFIG_MEMCG_KMEM + struct obj_cgroup *cgroup; +#endif +}; + +#ifdef CONFIG_MEMCG_KMEM +#define NEED_PCPUOBJ_EXT +#endif + struct pcpu_chunk { #ifdef CONFIG_PERCPU_STATS int nr_alloc; /* # of allocations */ @@ -57,8 +67,8 @@ struct pcpu_chunk { int end_offset; /* additional area required to have the region end page aligned */ -#ifdef CONFIG_MEMCG_KMEM - struct obj_cgroup **obj_cgroups; /* vector of object cgroups */ +#ifdef NEED_PCPUOBJ_EXT + struct pcpuobj_ext *obj_exts; /* vector of object cgroups */ #endif int nr_pages; /* # of pages served by this chunk */ @@ -67,6 +77,11 @@ struct pcpu_chunk { unsigned long populated[]; /* populated bitmap */ }; +static inline bool need_pcpuobj_ext(void) +{ + return !mem_cgroup_kmem_disabled(); +} + extern spinlock_t pcpu_lock; extern struct list_head *pcpu_chunk_lists; diff --git a/mm/percpu.c b/mm/percpu.c index 28e07ede46f6..95b26a6b718d 100644 --- a/mm/percpu.c +++ b/mm/percpu.c @@ -1392,9 +1392,9 @@ static struct pcpu_chunk * __init pcpu_alloc_first_chunk(unsigned long tmp_addr, panic("%s: Failed to allocate %zu bytes\n", __func__, alloc_size); -#ifdef CONFIG_MEMCG_KMEM +#ifdef NEED_PCPUOBJ_EXT /* first chunk is free to use */ - chunk->obj_cgroups = NULL; + chunk->obj_exts = NULL; #endif pcpu_init_md_blocks(chunk); @@ -1463,12 +1463,12 @@ static struct pcpu_chunk *pcpu_alloc_chunk(gfp_t gfp) if (!chunk->md_blocks) goto md_blocks_fail; -#ifdef CONFIG_MEMCG_KMEM - if (!mem_cgroup_kmem_disabled()) { - chunk->obj_cgroups = +#ifdef NEED_PCPUOBJ_EXT + if (need_pcpuobj_ext()) { + chunk->obj_exts = pcpu_mem_zalloc(pcpu_chunk_map_bits(chunk) * - sizeof(struct obj_cgroup *), gfp); - if (!chunk->obj_cgroups) + sizeof(struct pcpuobj_ext), gfp); + if (!chunk->obj_exts) goto objcg_fail; } #endif @@ -1480,7 +1480,7 @@ static struct pcpu_chunk *pcpu_alloc_chunk(gfp_t gfp) return chunk; -#ifdef CONFIG_MEMCG_KMEM +#ifdef NEED_PCPUOBJ_EXT objcg_fail: pcpu_mem_free(chunk->md_blocks); #endif @@ -1498,8 +1498,8 @@ static void pcpu_free_chunk(struct pcpu_chunk *chunk) { if (!chunk) return; -#ifdef CONFIG_MEMCG_KMEM - pcpu_mem_free(chunk->obj_cgroups); +#ifdef NEED_PCPUOBJ_EXT + pcpu_mem_free(chunk->obj_exts); #endif pcpu_mem_free(chunk->md_blocks); pcpu_mem_free(chunk->bound_map); @@ -1648,8 +1648,8 @@ static void pcpu_memcg_post_alloc_hook(struct obj_cgroup *objcg, if (!objcg) return; - if (likely(chunk && chunk->obj_cgroups)) { - chunk->obj_cgroups[off >> PCPU_MIN_ALLOC_SHIFT] = objcg; + if (likely(chunk && chunk->obj_exts)) { + chunk->obj_exts[off >> PCPU_MIN_ALLOC_SHIFT].cgroup = objcg; rcu_read_lock(); mod_memcg_state(obj_cgroup_memcg(objcg), MEMCG_PERCPU_B, @@ -1665,13 
+1665,13 @@ static void pcpu_memcg_free_hook(struct pcpu_chunk *chunk, int off, size_t size) { struct obj_cgroup *objcg; - if (unlikely(!chunk->obj_cgroups)) + if (unlikely(!chunk->obj_exts)) return; - objcg = chunk->obj_cgroups[off >> PCPU_MIN_ALLOC_SHIFT]; + objcg = chunk->obj_exts[off >> PCPU_MIN_ALLOC_SHIFT].cgroup; if (!objcg) return; - chunk->obj_cgroups[off >> PCPU_MIN_ALLOC_SHIFT] = NULL; + chunk->obj_exts[off >> PCPU_MIN_ALLOC_SHIFT].cgroup = NULL; obj_cgroup_uncharge(objcg, pcpu_obj_full_size(size)); From patchwork Mon May 1 16:54:40 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suren Baghdasaryan X-Patchwork-Id: 89118 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp72324vqo; Mon, 1 May 2023 10:20:48 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ5Ycr/KWPy01AJVlNxE9aLSdssC8N0U+NPGrtalkmK01qUnHRLvinpUQoeUuWM0ojp2Z9lQ X-Received: by 2002:a05:6a20:c1a4:b0:eb:b8:bdc8 with SMTP id bg36-20020a056a20c1a400b000eb00b8bdc8mr15424386pzb.57.1682961648524; Mon, 01 May 2023 10:20:48 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682961648; cv=none; d=google.com; s=arc-20160816; b=KiJc5cEss2BabnYlK8Sm5KPi3aP1LQnreQoafPtduW8ByAMEJrhxy6VvIQRDmy/nnL fw2+fxNx25KwTneLQ9kzf5yAhgvEy80amMyVNjjggYfFI6czx4gh9qr6l+bMjLz4ADd8 z9Luh9wRStVHDhGahQOJY0jNmYKxxpe60hSUddhNpNgpGXHsKo8EmyoA1OUX6Ig8GG/b D2HqDDKYWIsI0/5XKfQuwze111npXh+pe9dHXGRUBFZvWm2fijh+119Zpx9ILI+55uWV UEN2KfxOAjt+nt0FapfgnS5Zibj6HtfQOFZRz1nAwuJnXHPz6RHclLOohUjY8VxzOcwD m7Jw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:cc:to:from:subject:message-id:references :mime-version:in-reply-to:date:dkim-signature; bh=utmd4o6Wxf9dKKkm5+p28Q6keQxRmEnIpppKzQNbjdk=; b=cVVN0ufBUBUXH/x0SKMIJ9c68jCYxxLcTop8wFSelZCyqqv0BtygmepMdWGACTD35u IJvjvkLOSZK3xh4H9sdgeUk467izPU4pbBXBrJqV1sBi+Ss4jmlk+alJhnXiEU8zUmWc YwKM4WokTk1QZ2NjHqzdtNXD0slgV06j8F4XBh/MzuREV1b2UGaqo0SFUqYcs/J611t7 iqnseQCdWM5p5Cx/NqlUEB1odgK4jYFvJRMAH94ZlNuR8m+abkH2EYP6yO4WV/YBJkxf iwYWmPsb5J5na+CfoFi9i70CzPMtNxdINgWVnT/6iBSdu24qvcOJqpfpp+Y9JhhX76FJ tcfQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@google.com header.s=20221208 header.b=yzTHI1C2; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: from out1.vger.email (out1.vger.email. 
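
As a rough illustration of the pcpuobj_ext layout introduced above: each chunk keeps one small extension record per object slot in a vector parallel to the objects themselves (chunk->obj_exts, replacing the old obj_cgroups vector), and the vector is only allocated when some consumer of it is enabled, as the kernel's need_pcpuobj_ext() check decides. The sketch below uses hypothetical demo_* names and shows the record's members unconditionally, whereas the kernel guards them with CONFIG_MEMCG_KMEM and, in the next patch, CONFIG_MEM_ALLOC_PROFILING.

#include <stdlib.h>

/* One extension record per object slot. */
struct demo_obj_ext {
	void *cgroup;	/* memcg accounting, if enabled */
	void *tag;	/* allocation profiling, if enabled */
};

struct demo_chunk {
	int nr_slots;
	struct demo_obj_ext *obj_exts;	/* one entry per slot, or NULL */
};

/* Allocate the parallel vector only when some user of it is enabled. */
static int demo_chunk_init_exts(struct demo_chunk *chunk, int need_exts)
{
	if (!need_exts) {
		chunk->obj_exts = NULL;
		return 0;
	}
	chunk->obj_exts = calloc(chunk->nr_slots, sizeof(*chunk->obj_exts));
	return chunk->obj_exts ? 0 : -1;
}

Generalizing the vector this way means later patches only add members to the record instead of adding another parallel array per feature.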
[2620:137:e000::1:20]) by mx.google.com with ESMTP id v5-20020aa799c5000000b0063f1cb928bdsi24557409pfi.313.2023.05.01.10.20.35; Mon, 01 May 2023 10:20:48 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@google.com header.s=20221208 header.b=yzTHI1C2; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232450AbjEARCJ (ORCPT + 99 others); Mon, 1 May 2023 13:02:09 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60894 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232672AbjEAQ75 (ORCPT ); Mon, 1 May 2023 12:59:57 -0400 Received: from mail-yb1-xb49.google.com (mail-yb1-xb49.google.com [IPv6:2607:f8b0:4864:20::b49]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 522213C14 for ; Mon, 1 May 2023 09:56:44 -0700 (PDT) Received: by mail-yb1-xb49.google.com with SMTP id 3f1490d57ef6-b9a7e65b34aso4868710276.0 for ; Mon, 01 May 2023 09:56:43 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1682960176; x=1685552176; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=utmd4o6Wxf9dKKkm5+p28Q6keQxRmEnIpppKzQNbjdk=; b=yzTHI1C2zJou5AH/p8lPs0VsHNeDHmv3kTFdxuBKDy961uXVpU9fU3i66ZX+sGZxop K9tHzqQGK2fyGSX0IUIKHt297nLCAVUUQ9uE+ycaL5j8vF1Uq5qjYO9IBdKFeRBRz+JV h0XCiQt2rQy/vjGuvtAMwYO/Ef8AO0zgdlUV7UkjU/4j8KZorjx6YpJycy3NjV+U1jmW 8YeCYGEoAGbwFJFnuS6l7DE88NDAQo1cCW0vxXj2MMAEUI8zWICBIQThmt4kh65kKXHV qjMw8B7CVE+Aq4G5oCD+SKPBqvWYSqi0ooDgoVvg+pFWLmMj9ApTsyrB4ZzuvS1F+m6r BcdQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682960176; x=1685552176; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=utmd4o6Wxf9dKKkm5+p28Q6keQxRmEnIpppKzQNbjdk=; b=Ar5ZwodVAwAOLZ5370o3PWHJ3YbwPW1LLF5gVi8ULnSQdyvj6GCyAjRJgW+m/+Vi6H ILXVWmgKH/VtJjkJbg4Eb77eO62/R25MO1H1K7OCM/bxgcPCxoxb9bWo0ZD6/99iWV6D 0JbhZg2vtExG+ZcoMCOHVojcNM9x6RLRl+cDSiPbCVaeKlQzRH2v6LkoYs8QugkF7x4W rQjy12BWEt5sE0XViyyrEovc+oeU838n1EjBPk3tcwiEt4k2srmJFnf1HqWlX6W3N4h+ vNvrCJqW9rQrn4hXUcTc7V5f7N4ft/Mjad4REOPo4B7xuBu+IWbzdnHWzHJ8aHkIkaXo /Q0w== X-Gm-Message-State: AC+VfDxOiB8ccyERQg2nGvQiACpKsaF6et0lpsGrkxUlY23BF1dZd5yk B+GbdnKyBD1fSmdeQ0O53K7Z+emcxVQ= X-Received: from surenb-desktop.mtv.corp.google.com ([2620:15c:211:201:6d24:3efd:facc:7ac4]) (user=surenb job=sendgmr) by 2002:a05:6902:100e:b0:b8b:f584:6b73 with SMTP id w14-20020a056902100e00b00b8bf5846b73mr5602392ybt.10.1682960176468; Mon, 01 May 2023 09:56:16 -0700 (PDT) Date: Mon, 1 May 2023 09:54:40 -0700 In-Reply-To: <20230501165450.15352-1-surenb@google.com> Mime-Version: 1.0 References: <20230501165450.15352-1-surenb@google.com> X-Mailer: git-send-email 2.40.1.495.gc816e09b53d-goog Message-ID: <20230501165450.15352-31-surenb@google.com> Subject: [PATCH 30/40] mm: percpu: Add codetag reference into pcpuobj_ext From: Suren Baghdasaryan To: akpm@linux-foundation.org Cc: kent.overstreet@linux.dev, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, roman.gushchin@linux.dev, 
mgorman@suse.de, dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com, corbet@lwn.net, void@manifault.com, peterz@infradead.org, juri.lelli@redhat.com, ldufour@linux.ibm.com, catalin.marinas@arm.com, will@kernel.org, arnd@arndb.de, tglx@linutronix.de, mingo@redhat.com, dave.hansen@linux.intel.com, x86@kernel.org, peterx@redhat.com, david@redhat.com, axboe@kernel.dk, mcgrof@kernel.org, masahiroy@kernel.org, nathan@kernel.org, dennis@kernel.org, tj@kernel.org, muchun.song@linux.dev, rppt@kernel.org, paulmck@kernel.org, pasha.tatashin@soleen.com, yosryahmed@google.com, yuzhao@google.com, dhowells@redhat.com, hughd@google.com, andreyknvl@gmail.com, keescook@chromium.org, ndesaulniers@google.com, gregkh@linuxfoundation.org, ebiggers@google.com, ytcoode@gmail.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, bristot@redhat.com, vschneid@redhat.com, cl@linux.com, penberg@kernel.org, iamjoonsoo.kim@lge.com, 42.hyeyoo@gmail.com, glider@google.com, elver@google.com, dvyukov@google.com, shakeelb@google.com, songmuchun@bytedance.com, jbaron@akamai.com, rientjes@google.com, minchan@google.com, kaleshsingh@google.com, surenb@google.com, kernel-team@android.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-arch@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, kasan-dev@googlegroups.com, cgroups@vger.kernel.org X-Spam-Status: No, score=-9.6 required=5.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE,USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1764713193570602694?= X-GMAIL-MSGID: =?utf-8?q?1764713193570602694?= From: Kent Overstreet To store codetag for every per-cpu allocation, a codetag reference is embedded into pcpuobj_ext when CONFIG_MEM_ALLOC_PROFILING=y. Hooks to use the newly introduced codetag are added. 
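
Conceptually the two hooks form a charge/credit pair around each allocation's lifetime: the allocation hook remembers which tag was charged and by how much, and the free hook credits it back. Below is a minimal sketch of that pairing under hypothetical demo_* names; the real reference type and helpers (union codetag_ref, alloc_tag_add(), alloc_tag_sub_noalloc()) are the ones used in the diff that follows.

#include <stddef.h>

struct demo_tag {
	long bytes;	/* bytes currently attributed to this call site */
	long calls;	/* live allocations attributed to this call site */
};

struct demo_tag_ref {
	struct demo_tag *tag;
};

/* Allocation hook: remember which tag was charged and by how much. */
static void demo_tag_add(struct demo_tag_ref *ref, struct demo_tag *tag, size_t size)
{
	ref->tag = tag;
	tag->bytes += size;
	tag->calls++;
}

/* Free hook: credit the previously charged tag back and clear the reference. */
static void demo_tag_sub(struct demo_tag_ref *ref, size_t size)
{
	if (!ref->tag)
		return;
	ref->tag->bytes -= size;
	ref->tag->calls--;
	ref->tag = NULL;
}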
Signed-off-by: Kent Overstreet Signed-off-by: Suren Baghdasaryan --- mm/percpu-internal.h | 11 +++++++++-- mm/percpu.c | 26 ++++++++++++++++++++++++++ 2 files changed, 35 insertions(+), 2 deletions(-) diff --git a/mm/percpu-internal.h b/mm/percpu-internal.h index 2433e7b24172..c5d1d6723a66 100644 --- a/mm/percpu-internal.h +++ b/mm/percpu-internal.h @@ -36,9 +36,12 @@ struct pcpuobj_ext { #ifdef CONFIG_MEMCG_KMEM struct obj_cgroup *cgroup; #endif +#ifdef CONFIG_MEM_ALLOC_PROFILING + union codetag_ref tag; +#endif }; -#ifdef CONFIG_MEMCG_KMEM +#if defined(CONFIG_MEMCG_KMEM) || defined(CONFIG_MEM_ALLOC_PROFILING) #define NEED_PCPUOBJ_EXT #endif @@ -79,7 +82,11 @@ struct pcpu_chunk { static inline bool need_pcpuobj_ext(void) { - return !mem_cgroup_kmem_disabled(); + if (IS_ENABLED(CONFIG_MEM_ALLOC_PROFILING)) + return true; + if (!mem_cgroup_kmem_disabled()) + return true; + return false; } extern spinlock_t pcpu_lock; diff --git a/mm/percpu.c b/mm/percpu.c index 95b26a6b718d..4e2592f2e58f 100644 --- a/mm/percpu.c +++ b/mm/percpu.c @@ -1701,6 +1701,32 @@ static void pcpu_memcg_free_hook(struct pcpu_chunk *chunk, int off, size_t size) } #endif /* CONFIG_MEMCG_KMEM */ +#ifdef CONFIG_MEM_ALLOC_PROFILING +static void pcpu_alloc_tag_alloc_hook(struct pcpu_chunk *chunk, int off, + size_t size) +{ + if (mem_alloc_profiling_enabled() && likely(chunk->obj_exts)) { + alloc_tag_add(&chunk->obj_exts[off >> PCPU_MIN_ALLOC_SHIFT].tag, + current->alloc_tag, size); + } +} + +static void pcpu_alloc_tag_free_hook(struct pcpu_chunk *chunk, int off, size_t size) +{ + if (mem_alloc_profiling_enabled() && likely(chunk->obj_exts)) + alloc_tag_sub_noalloc(&chunk->obj_exts[off >> PCPU_MIN_ALLOC_SHIFT].tag, size); +} +#else +static void pcpu_alloc_tag_alloc_hook(struct pcpu_chunk *chunk, int off, + size_t size) +{ +} + +static void pcpu_alloc_tag_free_hook(struct pcpu_chunk *chunk, int off, size_t size) +{ +} +#endif + /** * pcpu_alloc - the percpu allocator * @size: size of area to allocate in bytes From patchwork Mon May 1 16:54:41 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suren Baghdasaryan X-Patchwork-Id: 89109 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp70958vqo; Mon, 1 May 2023 10:18:24 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ5rS0djQFBejPYpCkaWrWhWXxFThAPRZtNsrpfaZz0rHQHeISpdE3573qPecSpi4mM4SIod X-Received: by 2002:a05:6a00:c8c:b0:641:1f51:bae2 with SMTP id a12-20020a056a000c8c00b006411f51bae2mr19300168pfv.6.1682961504029; Mon, 01 May 2023 10:18:24 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682961504; cv=none; d=google.com; s=arc-20160816; b=C2OiDsP+L12MT3ofH7G/iIC5LkbqbWaaJn3EcDZv/aWNztk/bieB6nMxRD2ai5EaCR ityA1nqFYYhjxy4vC2AY1TR1q0aXmvYuZTLD+2bph/5E+uAy4SniEtAM/G6N9KMjwUO7 LSMpF9bsFbwSatiWsqtLj49exVpoT0gFqbU3ahl5EX1sqMXDNX53qWtgxF89z+zHSJ8A W/dTOZo6err91gRU/gKH9Uoo2L7hl7k5ft7Ug0c45FP5xg8RSNAeByktI23N+Q4pk1MY g+XiD22S7ghgCEilpwubM00Q25nUteHmQyjCc6ITvbMzFjjAK2EL0YAvmaHUwO/tx21J 2Xjg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:cc:to:from:subject:message-id:references :mime-version:in-reply-to:date:dkim-signature; bh=FLzZ9YV8+8xWjeDeRDWFnLJt6gby5ahlup+t8unODj4=; b=BRaBqB/bThxBc5af1ohWlpom58tAUaIljfOxxsPLcymQtaEPxEHX3VNdmUhyLX3CwY GrVFSAv8tEGbqm6DhSjPhkzYV0UYTqXizTTSPnKebxn2t8b8XkLR5pBCfcBQtgEVxDaa 4E685iYHKnBu72NQCm7OSx3MFHR9FYeYnPHOrXp0kRZpyvZCEEioHqOCHBJ1MMz+fxHr 
IyE22aAwLYCgRkiT0D2NS7GU/edAWtx+fxf9/ZUXgni4oIDhq8c0uz+9npR8xldm1u5Z G4GcpyV9fxGD5UDhHlur5UDm+Cji9CwUo7lmeu+zQE3R2HxvPDoQ3XCDr49+DrUqKDkX /Lqg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@google.com header.s=20221208 header.b=6MUphdHN; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id t6-20020a625f06000000b006251fb701a6si28167162pfb.285.2023.05.01.10.18.11; Mon, 01 May 2023 10:18:24 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@google.com header.s=20221208 header.b=6MUphdHN; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233163AbjEARCV (ORCPT + 99 others); Mon, 1 May 2023 13:02:21 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:32828 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232388AbjEARAX (ORCPT ); Mon, 1 May 2023 13:00:23 -0400 Received: from mail-yb1-xb49.google.com (mail-yb1-xb49.google.com [IPv6:2607:f8b0:4864:20::b49]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0D5133C1B for ; Mon, 1 May 2023 09:56:45 -0700 (PDT) Received: by mail-yb1-xb49.google.com with SMTP id 3f1490d57ef6-b9a7553f95dso5489987276.2 for ; Mon, 01 May 2023 09:56:45 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1682960178; x=1685552178; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=FLzZ9YV8+8xWjeDeRDWFnLJt6gby5ahlup+t8unODj4=; b=6MUphdHNce/Gp+joNLguOb4c4BxZ2K95cOfq0noWa5+zgK3o6mMkxvgmYTxKopfE12 gvzB4L7ekld5KXS4qJrG0iyIY1OJTcBOr1jnrj5Qj+XSjPc9d8l46YT4nC5jTNEYkgOa Ftsse48iJ0f92YHed3wWA/J0eqwt07vyXvq1REgD9bK10yNIvJKHoqfnJDmh6CoLNpLi pUPv1EUVjPqy0lNNdb2UUSVTJnBt6kuBEfOfDhudMx+qNc4MyWYtoUsdbUWBDNV9FGAp xUasOVCUh854FOcecSCfYemAj9x0uxOHD+fuFHr9oLGXAgOTof3XgPYghDgqHqQy10zL pGdA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682960178; x=1685552178; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=FLzZ9YV8+8xWjeDeRDWFnLJt6gby5ahlup+t8unODj4=; b=ZZzv80f0pv7/pT7b+0OI2ihtCpSO8vTHcDfY9tVyCPC60LsSCq3Nl9nNT6t0Ks/9JV HIlLJYcU55CwedOzZxmf7Vx3GHMDhXK7UDHx92ww1likt9YeejBCk3E67I8UCzufpF7v yIwfKOwNaoWTm5zVtT35/D6ByPmPZ6teMsluOT0UvJ1tK9nPLqSg3IzoYf62HrH4NWTa Ja7kJjufu2zATxTPLP2UyhuiqxX1k2X4neBKVj27mU+nLEkbngsUDZNLH6+kpHVFykog kQUaGpOM7ffMW740iKCFfhCtHivJex4iyYX59+kGHUq4GVet4kKdUjV7y43ww+xRrwgh 39EQ== X-Gm-Message-State: AC+VfDzFRPWKUjzj3Ay7YD5hj9I703J3JK1u+uAEdoL6vDBh9q8oddbw EkYVdZWlyjnS92dcqFgEhwIMo6P09Nw= X-Received: from surenb-desktop.mtv.corp.google.com ([2620:15c:211:201:6d24:3efd:facc:7ac4]) (user=surenb job=sendgmr) by 2002:a25:d18e:0:b0:b9e:5008:1770 with SMTP id i136-20020a25d18e000000b00b9e50081770mr393470ybg.8.1682960178602; Mon, 01 May 2023 09:56:18 
-0700 (PDT) Date: Mon, 1 May 2023 09:54:41 -0700 In-Reply-To: <20230501165450.15352-1-surenb@google.com> Mime-Version: 1.0 References: <20230501165450.15352-1-surenb@google.com> X-Mailer: git-send-email 2.40.1.495.gc816e09b53d-goog Message-ID: <20230501165450.15352-32-surenb@google.com> Subject: [PATCH 31/40] mm: percpu: enable per-cpu allocation tagging From: Suren Baghdasaryan To: akpm@linux-foundation.org Cc: kent.overstreet@linux.dev, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, roman.gushchin@linux.dev, mgorman@suse.de, dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com, corbet@lwn.net, void@manifault.com, peterz@infradead.org, juri.lelli@redhat.com, ldufour@linux.ibm.com, catalin.marinas@arm.com, will@kernel.org, arnd@arndb.de, tglx@linutronix.de, mingo@redhat.com, dave.hansen@linux.intel.com, x86@kernel.org, peterx@redhat.com, david@redhat.com, axboe@kernel.dk, mcgrof@kernel.org, masahiroy@kernel.org, nathan@kernel.org, dennis@kernel.org, tj@kernel.org, muchun.song@linux.dev, rppt@kernel.org, paulmck@kernel.org, pasha.tatashin@soleen.com, yosryahmed@google.com, yuzhao@google.com, dhowells@redhat.com, hughd@google.com, andreyknvl@gmail.com, keescook@chromium.org, ndesaulniers@google.com, gregkh@linuxfoundation.org, ebiggers@google.com, ytcoode@gmail.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, bristot@redhat.com, vschneid@redhat.com, cl@linux.com, penberg@kernel.org, iamjoonsoo.kim@lge.com, 42.hyeyoo@gmail.com, glider@google.com, elver@google.com, dvyukov@google.com, shakeelb@google.com, songmuchun@bytedance.com, jbaron@akamai.com, rientjes@google.com, minchan@google.com, kaleshsingh@google.com, surenb@google.com, kernel-team@android.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-arch@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, kasan-dev@googlegroups.com, cgroups@vger.kernel.org X-Spam-Status: No, score=-9.6 required=5.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1764713042057369180?= X-GMAIL-MSGID: =?utf-8?q?1764713042057369180?= Redefine __alloc_percpu, __alloc_percpu_gfp and __alloc_reserved_percpu to record allocations and deallocations done by these functions. 
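
In other words, the three former exported functions collapse into a single __pcpu_alloc() worker that takes a reserved flag and a gfp mask, plus thin macros over it, so one alloc_hooks()-style wrapper can cover every entry point at the call site. A loose userspace sketch of that shape, with hypothetical demo_* names rather than the kernel API:

#include <stdbool.h>
#include <stdlib.h>

/* Single worker; 'reserved' and 'gfp' stand in for the kernel parameters. */
static void *demo_pcpu_alloc(size_t size, size_t align, bool reserved, int gfp)
{
	(void)align;
	(void)reserved;
	(void)gfp;
	return calloc(1, size);		/* placeholder for the real allocator */
}

/* The former functions become one-line macros over the single worker. */
#define DEMO_GFP_KERNEL			0
#define demo_alloc_percpu_gfp(size, align, gfp) \
	demo_pcpu_alloc((size), (align), false, (gfp))
#define demo_alloc_percpu(size, align) \
	demo_pcpu_alloc((size), (align), false, DEMO_GFP_KERNEL)
#define demo_alloc_reserved_percpu(size, align) \
	demo_pcpu_alloc((size), (align), true, DEMO_GFP_KERNEL)

Each of these macros can then be wrapped with alloc_hooks() exactly as the mempool entry points were earlier in the series, which is what the diff below does for __alloc_percpu, __alloc_percpu_gfp and __alloc_reserved_percpu.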
Signed-off-by: Kent Overstreet Signed-off-by: Suren Baghdasaryan --- include/linux/percpu.h | 19 ++++++++---- mm/percpu.c | 66 +++++------------------------------------- 2 files changed, 22 insertions(+), 63 deletions(-) diff --git a/include/linux/percpu.h b/include/linux/percpu.h index 1338ea2aa720..51ec257379af 100644 --- a/include/linux/percpu.h +++ b/include/linux/percpu.h @@ -2,12 +2,14 @@ #ifndef __LINUX_PERCPU_H #define __LINUX_PERCPU_H +#include #include #include #include #include #include #include +#include #include @@ -116,7 +118,6 @@ extern int __init pcpu_page_first_chunk(size_t reserved_size, pcpu_fc_cpu_to_node_fn_t cpu_to_nd_fn); #endif -extern void __percpu *__alloc_reserved_percpu(size_t size, size_t align) __alloc_size(1); extern bool __is_kernel_percpu_address(unsigned long addr, unsigned long *can_addr); extern bool is_kernel_percpu_address(unsigned long addr); @@ -124,10 +125,15 @@ extern bool is_kernel_percpu_address(unsigned long addr); extern void __init setup_per_cpu_areas(void); #endif -extern void __percpu *__alloc_percpu_gfp(size_t size, size_t align, gfp_t gfp) __alloc_size(1); -extern void __percpu *__alloc_percpu(size_t size, size_t align) __alloc_size(1); -extern void free_percpu(void __percpu *__pdata); -extern phys_addr_t per_cpu_ptr_to_phys(void *addr); +extern void __percpu *__pcpu_alloc(size_t size, size_t align, bool reserved, + gfp_t gfp) __alloc_size(1); + +#define __alloc_percpu_gfp(_size, _align, _gfp) alloc_hooks( \ + __pcpu_alloc(_size, _align, false, _gfp), void __percpu *, NULL) +#define __alloc_percpu(_size, _align) alloc_hooks( \ + __pcpu_alloc(_size, _align, false, GFP_KERNEL), void __percpu *, NULL) +#define __alloc_reserved_percpu(_size, _align) alloc_hooks( \ + __pcpu_alloc(_size, _align, true, GFP_KERNEL), void __percpu *, NULL) #define alloc_percpu_gfp(type, gfp) \ (typeof(type) __percpu *)__alloc_percpu_gfp(sizeof(type), \ @@ -136,6 +142,9 @@ extern phys_addr_t per_cpu_ptr_to_phys(void *addr); (typeof(type) __percpu *)__alloc_percpu(sizeof(type), \ __alignof__(type)) +extern void free_percpu(void __percpu *__pdata); +extern phys_addr_t per_cpu_ptr_to_phys(void *addr); + extern unsigned long pcpu_nr_pages(void); #endif /* __LINUX_PERCPU_H */ diff --git a/mm/percpu.c b/mm/percpu.c index 4e2592f2e58f..4b5cf260d8e0 100644 --- a/mm/percpu.c +++ b/mm/percpu.c @@ -1728,7 +1728,7 @@ static void pcpu_alloc_tag_free_hook(struct pcpu_chunk *chunk, int off, size_t s #endif /** - * pcpu_alloc - the percpu allocator + * __pcpu_alloc - the percpu allocator * @size: size of area to allocate in bytes * @align: alignment of area (max PAGE_SIZE) * @reserved: allocate from the reserved chunk if available @@ -1742,8 +1742,8 @@ static void pcpu_alloc_tag_free_hook(struct pcpu_chunk *chunk, int off, size_t s * RETURNS: * Percpu pointer to the allocated area on success, NULL on failure. 
*/ -static void __percpu *pcpu_alloc(size_t size, size_t align, bool reserved, - gfp_t gfp) +void __percpu *__pcpu_alloc(size_t size, size_t align, bool reserved, + gfp_t gfp) { gfp_t pcpu_gfp; bool is_atomic; @@ -1909,6 +1909,8 @@ static void __percpu *pcpu_alloc(size_t size, size_t align, bool reserved, pcpu_memcg_post_alloc_hook(objcg, chunk, off, size); + pcpu_alloc_tag_alloc_hook(chunk, off, size); + return ptr; fail_unlock: @@ -1935,61 +1937,7 @@ static void __percpu *pcpu_alloc(size_t size, size_t align, bool reserved, return NULL; } - -/** - * __alloc_percpu_gfp - allocate dynamic percpu area - * @size: size of area to allocate in bytes - * @align: alignment of area (max PAGE_SIZE) - * @gfp: allocation flags - * - * Allocate zero-filled percpu area of @size bytes aligned at @align. If - * @gfp doesn't contain %GFP_KERNEL, the allocation doesn't block and can - * be called from any context but is a lot more likely to fail. If @gfp - * has __GFP_NOWARN then no warning will be triggered on invalid or failed - * allocation requests. - * - * RETURNS: - * Percpu pointer to the allocated area on success, NULL on failure. - */ -void __percpu *__alloc_percpu_gfp(size_t size, size_t align, gfp_t gfp) -{ - return pcpu_alloc(size, align, false, gfp); -} -EXPORT_SYMBOL_GPL(__alloc_percpu_gfp); - -/** - * __alloc_percpu - allocate dynamic percpu area - * @size: size of area to allocate in bytes - * @align: alignment of area (max PAGE_SIZE) - * - * Equivalent to __alloc_percpu_gfp(size, align, %GFP_KERNEL). - */ -void __percpu *__alloc_percpu(size_t size, size_t align) -{ - return pcpu_alloc(size, align, false, GFP_KERNEL); -} -EXPORT_SYMBOL_GPL(__alloc_percpu); - -/** - * __alloc_reserved_percpu - allocate reserved percpu area - * @size: size of area to allocate in bytes - * @align: alignment of area (max PAGE_SIZE) - * - * Allocate zero-filled percpu area of @size bytes aligned at @align - * from reserved percpu area if arch has set it up; otherwise, - * allocation is served from the same dynamic area. Might sleep. - * Might trigger writeouts. - * - * CONTEXT: - * Does GFP_KERNEL allocation. - * - * RETURNS: - * Percpu pointer to the allocated area on success, NULL on failure. 
- */ -void __percpu *__alloc_reserved_percpu(size_t size, size_t align) -{ - return pcpu_alloc(size, align, true, GFP_KERNEL); -} +EXPORT_SYMBOL_GPL(__pcpu_alloc); /** * pcpu_balance_free - manage the amount of free chunks @@ -2299,6 +2247,8 @@ void free_percpu(void __percpu *ptr) size = pcpu_free_area(chunk, off); + pcpu_alloc_tag_free_hook(chunk, off, size); + pcpu_memcg_free_hook(chunk, off, size); /* From patchwork Mon May 1 16:54:42 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suren Baghdasaryan X-Patchwork-Id: 89113 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp71543vqo; Mon, 1 May 2023 10:19:27 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ5Tpsn/nWsBA72tUm7uGA8ExX3c6dWEgme4BVlgKIOLOkM6Lf11Wuuy1Q9y3Et2+NredmXl X-Received: by 2002:a17:90b:17c3:b0:247:bab1:d901 with SMTP id me3-20020a17090b17c300b00247bab1d901mr14118649pjb.17.1682961566757; Mon, 01 May 2023 10:19:26 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682961566; cv=none; d=google.com; s=arc-20160816; b=IhMauLBgrQtk/3AJJBY0j8hBRY9xvERw5Q/uEWY38HoP5mmH8TJScCzYWihwpy3umU G8lXbaR3uYevOUmbLgPDM6kQ48ANHCkP2N0jV73OeJGDEqkfhM5tfGloKTzhSerTZ6cM 7WgQ0rirjb0K3wQmsBSwPjaSqWEhNPBXWCsyYhbxWXM1xKBH+gzgjDWdns5xit8fennQ aJy2HJCc2lF5eSHcfb/g5npzv8f9C8KCAChsl0Ul7feLHLXJ2X/2gei8XgrirLmFJdNx w1hxdwtDfGAXHVs+yC+/iFMopuWMsQxdVh/n7ZUbbgrI0FKATSVIpu5GdBTj6IU5iZj8 ccbg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:cc:to:from:subject:message-id:references :mime-version:in-reply-to:date:dkim-signature; bh=jY4Why07vSNVPpYg+gbjry+wnVwWuxvaUCerSTimV/s=; b=uVGrdnydHhOgzEoCd3czlcXFPPrCOZxWJCMO8MQ3myrDBOZvwIibnJzmrHMXcXT6sK C6reQbVqKY6qBi2OX63ERVJ2wCgjRiLq+ekinYpyMxidoJtoAU0K7eSST/KJBzxKIpl8 k/MDcadM+zqrZTtx/cgZXs/xzA8S0mlPboEuuUkc2ixnyj3hprcWNJkbt7pBEJcocgH9 QG/EBBXEwvOHu/hX4JmFDGAp4gaS3svOXoASec6adioZcnxTcJ2BkIUrW704q/QKyq5B DBg79mLUt6VSrZyMRvqSZt4KhesL4Kg+yWiWbFjPxvBadK19xVELh1JPzZBKjSJEwReR g6VQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@google.com header.s=20221208 header.b=Aq78soAa; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id o13-20020a17090ac70d00b0024711d63febsi30966279pjt.173.2023.05.01.10.19.14; Mon, 01 May 2023 10:19:26 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@google.com header.s=20221208 header.b=Aq78soAa; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233207AbjEARCe (ORCPT + 99 others); Mon, 1 May 2023 13:02:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38644 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232739AbjEARBM (ORCPT ); Mon, 1 May 2023 13:01:12 -0400 Received: from mail-yb1-xb4a.google.com (mail-yb1-xb4a.google.com [IPv6:2607:f8b0:4864:20::b4a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3760D40CB for ; Mon, 1 May 2023 09:56:51 -0700 (PDT) Received: by mail-yb1-xb4a.google.com with SMTP id 3f1490d57ef6-b9e50081556so491583276.3 for ; Mon, 01 May 2023 09:56:50 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20221208; t=1682960181; x=1685552181; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=jY4Why07vSNVPpYg+gbjry+wnVwWuxvaUCerSTimV/s=; b=Aq78soAaAyMief+cDBMeso8QgziJr1UcaoZbGj9scpfWVuA1jJXbc7+tVVMa5n+pXD zJNNMs5P7WUHVBw683ECTpg5npVx6QBf0bUN7DdPu6j/C+MRzKsa8URcyNS8bC7atQCY 9SigXecTat+KM4eyT+hIxO14+/p9qQaXDCEC1rzj2GobJRxENTqWjSQsycHoHxjax/gY eWPPPBE6ZeI91QsbK/YSznn2yYLzHfxgEG9YEj/uwkJeJF6xe27cleLqDewTpXr/rX+O cI/7bSXViuC98rFsCexDBf6uZdpJdij1JBRms+fpyJvFKEAl56eecSWlzCcPWSAQrlLn rFyQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1682960181; x=1685552181; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=jY4Why07vSNVPpYg+gbjry+wnVwWuxvaUCerSTimV/s=; b=YYVV0UGVpL8fGr6GNbzHJTTw7fYzekzRnuiip94QogQp/MF1dZETQ0fTyVnEKc2H+1 MPnEYo+hUslwO83fAExkquFBjOYVJs+oRpRBNgoLwZJFsBQiySqTmI46Zkvz1ou7xJ5Z k5ywaBO4NSTZvtaOag8MIVbHFfr0dcZMb78KXWKmWj24S/AviKg5sAxPjhYKlKuzTAhM olWGhpkueKHGWyHbZlJPFTh+0zPI7hNo+WNobHCxkJZnM+KCamawY2LxYCVhN6dNk674 DICbFB4J+mv84dMZLABC7CiA4HeI3G8qey6xf9c9wUQXqWx42PtdmMtJiXj1QccbajCJ gRhQ== X-Gm-Message-State: AC+VfDyRW09uav+35wgDmow9G+FNedSdAEUjUtkYuQ4OnNFv6kPBwWt0 A2h2siee/2zBUtrek4+vbRNaSdQXdmM= X-Received: from surenb-desktop.mtv.corp.google.com ([2620:15c:211:201:6d24:3efd:facc:7ac4]) (user=surenb job=sendgmr) by 2002:a25:24c4:0:b0:997:c919:4484 with SMTP id k187-20020a2524c4000000b00997c9194484mr5789976ybk.6.1682960180824; Mon, 01 May 2023 09:56:20 -0700 (PDT) Date: Mon, 1 May 2023 09:54:42 -0700 In-Reply-To: <20230501165450.15352-1-surenb@google.com> Mime-Version: 1.0 References: <20230501165450.15352-1-surenb@google.com> X-Mailer: git-send-email 2.40.1.495.gc816e09b53d-goog Message-ID: <20230501165450.15352-33-surenb@google.com> Subject: [PATCH 32/40] arm64: Fix circular header dependency From: Suren Baghdasaryan To: akpm@linux-foundation.org Cc: kent.overstreet@linux.dev, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, roman.gushchin@linux.dev, mgorman@suse.de, 
dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com, corbet@lwn.net, void@manifault.com, peterz@infradead.org, juri.lelli@redhat.com, ldufour@linux.ibm.com, catalin.marinas@arm.com, will@kernel.org, arnd@arndb.de, tglx@linutronix.de, mingo@redhat.com, dave.hansen@linux.intel.com, x86@kernel.org, peterx@redhat.com, david@redhat.com, axboe@kernel.dk, mcgrof@kernel.org, masahiroy@kernel.org, nathan@kernel.org, dennis@kernel.org, tj@kernel.org, muchun.song@linux.dev, rppt@kernel.org, paulmck@kernel.org, pasha.tatashin@soleen.com, yosryahmed@google.com, yuzhao@google.com, dhowells@redhat.com, hughd@google.com, andreyknvl@gmail.com, keescook@chromium.org, ndesaulniers@google.com, gregkh@linuxfoundation.org, ebiggers@google.com, ytcoode@gmail.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, bristot@redhat.com, vschneid@redhat.com, cl@linux.com, penberg@kernel.org, iamjoonsoo.kim@lge.com, 42.hyeyoo@gmail.com, glider@google.com, elver@google.com, dvyukov@google.com, shakeelb@google.com, songmuchun@bytedance.com, jbaron@akamai.com, rientjes@google.com, minchan@google.com, kaleshsingh@google.com, surenb@google.com, kernel-team@android.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-arch@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-modules@vger.kernel.org, kasan-dev@googlegroups.com, cgroups@vger.kernel.org X-Spam-Status: No, score=-9.6 required=5.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE,USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1764713108236138628?= X-GMAIL-MSGID: =?utf-8?q?1764713108236138628?= From: Kent Overstreet Replace linux/percpu.h include with asm/percpu.h to avoid circular dependency. 
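
The same trick appears twice in this series (here, and in the earlier hrtimer.h/time_namespace.h change): include the narrowest header that actually defines what you need, or drop the include entirely in favour of a forward declaration when only a pointer to the type is used. A short, self-contained sketch of the forward-declaration half, with hypothetical foo/bar names rather than the kernel headers:

/*
 * foo.h only stores and passes pointers to struct bar, so a forward
 * declaration is enough; there is no #include "bar.h", and therefore no
 * cycle even if bar.h should ever need foo.h.
 */
struct bar;				/* forward declaration */

struct foo {
	struct bar *owner;		/* incomplete type is fine behind a pointer */
};

static inline void foo_set_owner(struct foo *f, struct bar *b)
{
	f->owner = b;			/* still no need for bar's full definition */
}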
Signed-off-by: Kent Overstreet Signed-off-by: Suren Baghdasaryan --- arch/arm64/include/asm/spectre.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/arm64/include/asm/spectre.h b/arch/arm64/include/asm/spectre.h index db7b371b367c..31823d9715ab 100644 --- a/arch/arm64/include/asm/spectre.h +++ b/arch/arm64/include/asm/spectre.h @@ -13,8 +13,8 @@ #define __BP_HARDEN_HYP_VECS_SZ ((BP_HARDEN_EL2_SLOTS - 1) * SZ_2K) #ifndef __ASSEMBLY__ - -#include +#include +#include #include #include From patchwork Mon May 1 16:54:43 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Suren Baghdasaryan X-Patchwork-Id: 89095 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp64142vqo; Mon, 1 May 2023 10:08:09 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ479KD5zX8FoI+2N3XHjTnpWm+VrlBIn73p8X95YnDVYH9a+6MO/By/ZQd30xX7vlhz4RvD X-Received: by 2002:a05:6a00:1890:b0:63b:22e7:6ee6 with SMTP id x16-20020a056a00189000b0063b22e76ee6mr22136012pfh.31.1682960888733; Mon, 01 May 2023 10:08:08 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1682960888; cv=none; d=google.com; s=arc-20160816; b=nXuC70zDvrNb3AxBVxKoYxGiEkvIHAziwChhTONIRESqsklGChtyPn+wM0rUq0BKVJ 5Pet5k98GTb0Q4kly4fIfpM1qgn1ciHCCRUGdFTkSHnhqKcE4BTT0G/mY1zZiDvjvAby Tw5IeEbgfSD2KWzmB5bsLQoJ1ABHaNHnb1Tj1VkWqQos53bSrgGqZy3V/PMbtSTVZAS/ prCxFMHt1pHXjqSthITRdZkKt0VozxHL1MTE8Rb3GRBINCkQltLKOUYGVc2Ah5tR2uiB 0n729Ng6lOEFb8hoLi6tUx4DjX2iRIIFyxSWOTPQxbQMMB9M4Fs7YdwA7i6yrcLvx3AA IUdg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:cc:to:from:subject:message-id:references :mime-version:in-reply-to:date:dkim-signature; bh=bfMxO5ATd5n4S/fh5B/Nudv/nxDkBrDhDPER315pa0U=; b=kNWFd/8WFrAdAHfSS10AiVDAkx7VlIT/SvodHpI+0zmz8Eu6S0wlXvPtvENAVzlnZ3 8V1I6+YkRl7wF5lgx0QDDgUXAfZZME4EZP1JGhgwBo3REtUi+5riwlD3i4jzZapxfXBG mUfrMslqndT4Uro/q39sPgqde4GfAHg7vfeYixdtMoIdHW9p/erSf9bBeCGaP+aq9Fsw FYQgV/DWN3ST743kbjSz5u5dSNVOnK2Xr7frFKGYQ4aiY5bfPK/dD0kSQt8j35hb7T+M vCkDxljm0MwRCZ9L7fGPzX6LITh2SCmyNMACM7BGP6fbJQ9+ShAEAu9LBJqGqQVXgBRu ovlQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@google.com header.s=20221208 header.b=ebS+MuTL; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=google.com Received: from out1.vger.email (out1.vger.email. 

From patchwork Mon May 1 16:54:43 2023
Subject: [PATCH 33/40] move stack capture functionality into a separate function for reuse
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Date: Mon, 1 May 2023 09:54:43 -0700
Message-ID: <20230501165450.15352-34-surenb@google.com>
In-Reply-To: <20230501165450.15352-1-surenb@google.com>
References: <20230501165450.15352-1-surenb@google.com>
Cc: kent.overstreet@linux.dev, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, roman.gushchin@linux.dev,
 mgorman@suse.de, linux-mm@kvack.org, linux-kernel@vger.kernel.org

Make the save_stack() function part of the stackdepot API so that it can be
used outside of page_owner. Also rename task_struct's in_page_owner flag to
in_capture_stack to better convey the wider use of this flag.

Signed-off-by: Suren Baghdasaryan --- include/linux/sched.h | 6 ++-- include/linux/stackdepot.h | 16 +++++++++ lib/stackdepot.c | 68 ++++++++++++++++++++++++++++++++++++++ mm/page_owner.c | 52 ++--------------------------- 4 files changed, 90 insertions(+), 52 deletions(-) diff --git a/include/linux/sched.h b/include/linux/sched.h index 33708bf8f191..6eca46ab6d78 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -942,9 +942,9 @@ struct task_struct { /* Stalled due to lack of memory */ unsigned in_memstall:1; #endif -#ifdef CONFIG_PAGE_OWNER - /* Used by page_owner=on to detect recursion in page tracking. */ - unsigned in_page_owner:1; +#ifdef CONFIG_STACKDEPOT + /* Used by stack_depot_capture_stack to detect recursion.
*/ + unsigned in_capture_stack:1; #endif #ifdef CONFIG_EVENTFD /* Recursion prevention for eventfd_signal() */ diff --git a/include/linux/stackdepot.h b/include/linux/stackdepot.h index e58306783d8e..baf7e80cf449 100644 --- a/include/linux/stackdepot.h +++ b/include/linux/stackdepot.h @@ -164,4 +164,20 @@ depot_stack_handle_t __must_check stack_depot_set_extra_bits( */ unsigned int stack_depot_get_extra_bits(depot_stack_handle_t handle); +/** + * stack_depot_capture_init - Initialize stack depot capture mechanism + * + * Return: Stack depot initialization status + */ +bool stack_depot_capture_init(void); + +/** + * stack_depot_capture_stack - Capture current stack trace into stack depot + * + * @flags: Allocation GFP flags + * + * Return: Handle of the stack trace stored in depot, 0 on failure + */ +depot_stack_handle_t stack_depot_capture_stack(gfp_t flags); + #endif diff --git a/lib/stackdepot.c b/lib/stackdepot.c index 2f5aa851834e..c7e5e22fcb16 100644 --- a/lib/stackdepot.c +++ b/lib/stackdepot.c @@ -539,3 +539,71 @@ unsigned int stack_depot_get_extra_bits(depot_stack_handle_t handle) return parts.extra; } EXPORT_SYMBOL(stack_depot_get_extra_bits); + +static depot_stack_handle_t recursion_handle; +static depot_stack_handle_t failure_handle; + +static __always_inline depot_stack_handle_t create_custom_stack(void) +{ + unsigned long entries[4]; + unsigned int nr_entries; + + nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0); + return stack_depot_save(entries, nr_entries, GFP_KERNEL); +} + +static noinline void register_recursion_stack(void) +{ + recursion_handle = create_custom_stack(); +} + +static noinline void register_failure_stack(void) +{ + failure_handle = create_custom_stack(); +} + +bool stack_depot_capture_init(void) +{ + static DEFINE_MUTEX(stack_depot_capture_init_mutex); + static bool utility_stacks_ready; + + mutex_lock(&stack_depot_capture_init_mutex); + if (!utility_stacks_ready) { + register_recursion_stack(); + register_failure_stack(); + utility_stacks_ready = true; + } + mutex_unlock(&stack_depot_capture_init_mutex); + + return utility_stacks_ready; +} + +/* TODO: teach stack_depot_capture_stack to use off stack temporal storage */ +#define CAPTURE_STACK_DEPTH (16) + +depot_stack_handle_t stack_depot_capture_stack(gfp_t flags) +{ + unsigned long entries[CAPTURE_STACK_DEPTH]; + depot_stack_handle_t handle; + unsigned int nr_entries; + + /* + * Avoid recursion. 
+ * + * Sometimes page metadata allocation tracking requires more + * memory to be allocated: + * - when new stack trace is saved to stack depot + * - when backtrace itself is calculated (ia64) + */ + if (current->in_capture_stack) + return recursion_handle; + current->in_capture_stack = 1; + + nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 2); + handle = stack_depot_save(entries, nr_entries, flags); + if (!handle) + handle = failure_handle; + + current->in_capture_stack = 0; + return handle; +} diff --git a/mm/page_owner.c b/mm/page_owner.c index 8b6086c666e6..9fafbc290d5b 100644 --- a/mm/page_owner.c +++ b/mm/page_owner.c @@ -15,12 +15,6 @@ #include "internal.h" -/* - * TODO: teach PAGE_OWNER_STACK_DEPTH (__dump_page_owner and save_stack) - * to use off stack temporal storage - */ -#define PAGE_OWNER_STACK_DEPTH (16) - struct page_owner { unsigned short order; short last_migrate_reason; @@ -37,8 +31,6 @@ struct page_owner { static bool page_owner_enabled __initdata; DEFINE_STATIC_KEY_FALSE(page_owner_inited); -static depot_stack_handle_t dummy_handle; -static depot_stack_handle_t failure_handle; static depot_stack_handle_t early_handle; static void init_early_allocated_pages(void); @@ -68,16 +60,6 @@ static __always_inline depot_stack_handle_t create_dummy_stack(void) return stack_depot_save(entries, nr_entries, GFP_KERNEL); } -static noinline void register_dummy_stack(void) -{ - dummy_handle = create_dummy_stack(); -} - -static noinline void register_failure_stack(void) -{ - failure_handle = create_dummy_stack(); -} - static noinline void register_early_stack(void) { early_handle = create_dummy_stack(); @@ -88,8 +70,7 @@ static __init void init_page_owner(void) if (!page_owner_enabled) return; - register_dummy_stack(); - register_failure_stack(); + stack_depot_capture_init(); register_early_stack(); static_branch_enable(&page_owner_inited); init_early_allocated_pages(); @@ -107,33 +88,6 @@ static inline struct page_owner *get_page_owner(struct page_ext *page_ext) return (void *)page_ext + page_owner_ops.offset; } -static noinline depot_stack_handle_t save_stack(gfp_t flags) -{ - unsigned long entries[PAGE_OWNER_STACK_DEPTH]; - depot_stack_handle_t handle; - unsigned int nr_entries; - - /* - * Avoid recursion. 
- * - * Sometimes page metadata allocation tracking requires more - * memory to be allocated: - * - when new stack trace is saved to stack depot - * - when backtrace itself is calculated (ia64) - */ - if (current->in_page_owner) - return dummy_handle; - current->in_page_owner = 1; - - nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 2); - handle = stack_depot_save(entries, nr_entries, flags); - if (!handle) - handle = failure_handle; - - current->in_page_owner = 0; - return handle; -} - void __reset_page_owner(struct page *page, unsigned short order) { int i; @@ -146,7 +100,7 @@ void __reset_page_owner(struct page *page, unsigned short order) if (unlikely(!page_ext)) return; - handle = save_stack(GFP_NOWAIT | __GFP_NOWARN); + handle = stack_depot_capture_stack(GFP_NOWAIT | __GFP_NOWARN); for (i = 0; i < (1 << order); i++) { __clear_bit(PAGE_EXT_OWNER_ALLOCATED, &page_ext->flags); page_owner = get_page_owner(page_ext); @@ -189,7 +143,7 @@ noinline void __set_page_owner(struct page *page, unsigned short order, struct page_ext *page_ext; depot_stack_handle_t handle; - handle = save_stack(gfp_mask); + handle = stack_depot_capture_stack(gfp_mask); page_ext = page_ext_get(page); if (unlikely(!page_ext))
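For orientation, a minimal sketch of how a subsystem other than page_owner could consume the interface added above, assuming CONFIG_STACKDEPOT is enabled; the function name my_record_alloc_site and the GFP choice are illustrative, not part of the patch:

  #include <linux/stackdepot.h>
  #include <linux/gfp.h>

  /* Hypothetical caller: record the stack of the current allocation site. */
  static depot_stack_handle_t my_record_alloc_site(void)
  {
          /* One-time setup of the shared recursion/failure utility stacks. */
          if (!stack_depot_capture_init())
                  return 0;

          /*
           * Returns a handle to the captured stack trace, the recursion
           * handle if the capture path re-entered itself, or the failure
           * handle if the depot could not store the trace.
           */
          return stack_depot_capture_stack(GFP_NOWAIT | __GFP_NOWARN);
  }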

From patchwork Mon May 1 16:54:44 2023
Subject: [PATCH 34/40] lib: code tagging context capture support
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Date: Mon, 1 May 2023 09:54:44 -0700
Message-ID: <20230501165450.15352-35-surenb@google.com>
In-Reply-To: <20230501165450.15352-1-surenb@google.com>
References: <20230501165450.15352-1-surenb@google.com>
Cc: kent.overstreet@linux.dev, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, roman.gushchin@linux.dev, mgorman@suse.de,
 dave@stgolabs.net, linux-mm@kvack.org, linux-kernel@vger.kernel.org

Add support for code tag context capture when registering a new code tag
type. When context capture for a specific code tag is enabled, codetag_ref
will point to a codetag_ctx object which can be attached to an
application-specific object that stores the code invocation context.

codetag_ctx has a pointer to its codetag_with_ctx object, which has the
codetag object embedded in it. All context objects of the same code tag are
placed on the codetag_with_ctx.ctx_head linked list. codetag.flags is used
to indicate when context capture for the associated code tag has been
initialized and enabled.

Signed-off-by: Suren Baghdasaryan --- include/linux/codetag.h | 50 +++++++++++++- include/linux/codetag_ctx.h | 48 +++++++++++++ lib/codetag.c | 134 ++++++++++++++++++++++++++++++++++++ 3 files changed, 231 insertions(+), 1 deletion(-) create mode 100644 include/linux/codetag_ctx.h diff --git a/include/linux/codetag.h b/include/linux/codetag.h index 87207f199ac9..9ab2f017e845 100644 --- a/include/linux/codetag.h +++ b/include/linux/codetag.h @@ -5,8 +5,12 @@ #ifndef _LINUX_CODETAG_H #define _LINUX_CODETAG_H +#include +#include #include +struct kref; +struct codetag_ctx; struct codetag_iterator; struct codetag_type; struct seq_buf; @@ -18,15 +22,38 @@ struct module; * an array of these.
*/ struct codetag { - unsigned int flags; /* used in later patches */ + unsigned int flags; /* has to be the first member shared with codetag_ctx */ unsigned int lineno; const char *modname; const char *function; const char *filename; } __aligned(8); +/* codetag_with_ctx flags */ +#define CTC_FLAG_CTX_PTR (1 << 0) +#define CTC_FLAG_CTX_READY (1 << 1) +#define CTC_FLAG_CTX_ENABLED (1 << 2) + +/* + * Code tag with context capture support. Contains a list to store context for + * each tag hit, a lock protecting the list and a flag to indicate whether + * context capture is enabled for the tag. + */ +struct codetag_with_ctx { + struct codetag ct; + struct list_head ctx_head; + spinlock_t ctx_lock; +} __aligned(8); + +/* + * Tag reference can point to codetag directly or indirectly via codetag_ctx. + * Direct codetag pointer is used when context capture is disabled or not + * supported. When context capture for the tag is used, the reference points + * to the codetag_ctx through which the codetag can be reached. + */ union codetag_ref { struct codetag *ct; + struct codetag_ctx *ctx; }; struct codetag_range { @@ -46,6 +73,7 @@ struct codetag_type_desc { struct codetag_module *cmod); bool (*module_unload)(struct codetag_type *cttype, struct codetag_module *cmod); + void (*free_ctx)(struct kref *ref); }; struct codetag_iterator { @@ -53,6 +81,7 @@ struct codetag_iterator { struct codetag_module *cmod; unsigned long mod_id; struct codetag *ct; + struct codetag_ctx *ctx; }; #define CODE_TAG_INIT { \ @@ -63,9 +92,28 @@ struct codetag_iterator { .flags = 0, \ } +static inline bool is_codetag_ctx_ref(union codetag_ref *ref) +{ + return !!(ref->ct->flags & CTC_FLAG_CTX_PTR); +} + +static inline +struct codetag_with_ctx *ct_to_ctc(struct codetag *ct) +{ + return container_of(ct, struct codetag_with_ctx, ct); +} + void codetag_lock_module_list(struct codetag_type *cttype, bool lock); struct codetag_iterator codetag_get_ct_iter(struct codetag_type *cttype); struct codetag *codetag_next_ct(struct codetag_iterator *iter); +struct codetag_ctx *codetag_next_ctx(struct codetag_iterator *iter); + +bool codetag_enable_ctx(struct codetag_with_ctx *ctc, bool enable); +static inline bool codetag_ctx_enabled(struct codetag_with_ctx *ctc) +{ + return !!(ctc->ct.flags & CTC_FLAG_CTX_ENABLED); +} +bool codetag_has_ctx(struct codetag_with_ctx *ctc); void codetag_to_text(struct seq_buf *out, struct codetag *ct); diff --git a/include/linux/codetag_ctx.h b/include/linux/codetag_ctx.h new file mode 100644 index 000000000000..e741484f0e08 --- /dev/null +++ b/include/linux/codetag_ctx.h @@ -0,0 +1,48 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * code tag context + */ +#ifndef _LINUX_CODETAG_CTX_H +#define _LINUX_CODETAG_CTX_H + +#include +#include + +/* Code tag hit context. 
*/ +struct codetag_ctx { + unsigned int flags; /* has to be the first member shared with codetag */ + struct codetag_with_ctx *ctc; + struct list_head node; + struct kref refcount; +} __aligned(8); + +static inline struct codetag_ctx *kref_to_ctx(struct kref *refcount) +{ + return container_of(refcount, struct codetag_ctx, refcount); +} + +static inline void add_ctx(struct codetag_ctx *ctx, + struct codetag_with_ctx *ctc) +{ + kref_init(&ctx->refcount); + spin_lock(&ctc->ctx_lock); + ctx->flags = CTC_FLAG_CTX_PTR; + ctx->ctc = ctc; + list_add_tail(&ctx->node, &ctc->ctx_head); + spin_unlock(&ctc->ctx_lock); +} + +static inline void rem_ctx(struct codetag_ctx *ctx, + void (*free_ctx)(struct kref *refcount)) +{ + struct codetag_with_ctx *ctc = ctx->ctc; + + spin_lock(&ctc->ctx_lock); + /* ctx might have been removed while we were using it */ + if (!list_empty(&ctx->node)) + list_del_init(&ctx->node); + spin_unlock(&ctc->ctx_lock); + kref_put(&ctx->refcount, free_ctx); +} + +#endif /* _LINUX_CODETAG_CTX_H */ diff --git a/lib/codetag.c b/lib/codetag.c index 84f90f3b922c..d891bbe4481d 100644 --- a/lib/codetag.c +++ b/lib/codetag.c @@ -1,5 +1,6 @@ // SPDX-License-Identifier: GPL-2.0-only #include +#include #include #include #include @@ -92,6 +93,139 @@ struct codetag *codetag_next_ct(struct codetag_iterator *iter) return ct; } +static struct codetag_ctx *next_ctx_from_ct(struct codetag_iterator *iter) +{ + struct codetag_with_ctx *ctc; + struct codetag_ctx *ctx = NULL; + struct codetag *ct = iter->ct; + + while (ct) { + if (!(ct->flags & CTC_FLAG_CTX_READY)) + goto next; + + ctc = ct_to_ctc(ct); + spin_lock(&ctc->ctx_lock); + if (!list_empty(&ctc->ctx_head)) { + ctx = list_first_entry(&ctc->ctx_head, + struct codetag_ctx, node); + kref_get(&ctx->refcount); + } + spin_unlock(&ctc->ctx_lock); + if (ctx) + break; +next: + ct = codetag_next_ct(iter); + } + + iter->ctx = ctx; + return ctx; +} + +struct codetag_ctx *codetag_next_ctx(struct codetag_iterator *iter) +{ + struct codetag_ctx *ctx = iter->ctx; + struct codetag_ctx *found = NULL; + + lockdep_assert_held(&iter->cttype->mod_lock); + + if (!ctx) + return next_ctx_from_ct(iter); + + spin_lock(&ctx->ctc->ctx_lock); + /* + * Do not advance if the object was isolated, restart at the same tag. 
+ */ + if (!list_empty(&ctx->node)) { + if (list_is_last(&ctx->node, &ctx->ctc->ctx_head)) { + /* Finished with this tag, advance to the next */ + codetag_next_ct(iter); + } else { + found = list_next_entry(ctx, node); + kref_get(&found->refcount); + } + } + spin_unlock(&ctx->ctc->ctx_lock); + kref_put(&ctx->refcount, iter->cttype->desc.free_ctx); + + if (!found) + return next_ctx_from_ct(iter); + + iter->ctx = found; + return found; +} + +static struct codetag_type *find_cttype(struct codetag *ct) +{ + struct codetag_module *cmod; + struct codetag_type *cttype; + unsigned long mod_id; + unsigned long tmp; + + mutex_lock(&codetag_lock); + list_for_each_entry(cttype, &codetag_types, link) { + down_read(&cttype->mod_lock); + idr_for_each_entry_ul(&cttype->mod_idr, cmod, tmp, mod_id) { + if (ct >= cmod->range.start && ct < cmod->range.stop) { + up_read(&cttype->mod_lock); + goto found; + } + } + up_read(&cttype->mod_lock); + } + cttype = NULL; +found: + mutex_unlock(&codetag_lock); + + return cttype; +} + +bool codetag_enable_ctx(struct codetag_with_ctx *ctc, bool enable) +{ + struct codetag_type *cttype = find_cttype(&ctc->ct); + + if (!cttype || !cttype->desc.free_ctx) + return false; + + lockdep_assert_held(&cttype->mod_lock); + BUG_ON(!rwsem_is_locked(&cttype->mod_lock)); + + if (codetag_ctx_enabled(ctc) == enable) + return false; + + if (enable) { + /* Initialize context capture fields only once */ + if (!(ctc->ct.flags & CTC_FLAG_CTX_READY)) { + spin_lock_init(&ctc->ctx_lock); + INIT_LIST_HEAD(&ctc->ctx_head); + ctc->ct.flags |= CTC_FLAG_CTX_READY; + } + ctc->ct.flags |= CTC_FLAG_CTX_ENABLED; + } else { + /* + * The list of context objects is intentionally left untouched. + * It can be read back and if context capture is re-enablied it + * will append new objects. 
+ */ + ctc->ct.flags &= ~CTC_FLAG_CTX_ENABLED; + } + + return true; +} + +bool codetag_has_ctx(struct codetag_with_ctx *ctc) +{ + bool no_ctx; + + if (!(ctc->ct.flags & CTC_FLAG_CTX_READY)) + return false; + + spin_lock(&ctc->ctx_lock); + no_ctx = list_empty(&ctc->ctx_head); + spin_unlock(&ctc->ctx_lock); + + return !no_ctx; +} + void codetag_to_text(struct seq_buf *out, struct codetag *ct) { seq_buf_printf(out, "%s:%u module:%s func:%s",
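As a usage sketch (not part of the patch), a code tag type that opts into context capture would embed struct codetag_ctx in its own per-hit context object and attach it with the helpers above; all names prefixed my_ are made up for illustration:

  #include <linux/codetag.h>
  #include <linux/codetag_ctx.h>
  #include <linux/slab.h>

  /* Hypothetical per-hit context embedding the generic codetag_ctx. */
  struct my_hit_ctx {
          struct codetag_ctx ctx;
          unsigned long cookie;
  };

  /* Passed as codetag_type_desc.free_ctx when registering the type. */
  static void my_free_ctx(struct kref *refcount)
  {
          kfree(container_of(kref_to_ctx(refcount), struct my_hit_ctx, ctx));
  }

  static struct codetag_ctx *my_record_hit(struct codetag_with_ctx *ctc,
                                           unsigned long cookie)
  {
          struct my_hit_ctx *c;

          if (!codetag_ctx_enabled(ctc))
                  return NULL;

          c = kmalloc(sizeof(*c), GFP_KERNEL);
          if (!c)
                  return NULL;

          c->cookie = cookie;
          add_ctx(&c->ctx, ctc);  /* links the context into ctc->ctx_head */
          return &c->ctx;
  }

  static void my_drop_hit(struct codetag_ctx *ctx)
  {
          rem_ctx(ctx, my_free_ctx);
  }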

From patchwork Mon May 1 16:54:45 2023
Subject: [PATCH 35/40] lib: implement context capture support for tagged allocations
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Date: Mon, 1 May 2023 09:54:45 -0700
Message-ID: <20230501165450.15352-36-surenb@google.com>
In-Reply-To: <20230501165450.15352-1-surenb@google.com>
References: <20230501165450.15352-1-surenb@google.com>
Cc: kent.overstreet@linux.dev, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, roman.gushchin@linux.dev,
 mgorman@suse.de, linux-mm@kvack.org, linux-kernel@vger.kernel.org

Implement a mechanism for capturing the allocation call context, which
consists of:
 - the allocation size
 - pid, tgid and name of the allocating task
 - the allocation timestamp
 - the allocation call stack

The patch creates an allocations.ctx file which can be written to enable or
disable context capture for a specific code tag. The captured context can be
obtained by reading the allocations.ctx file.

Usage example:

  echo "file include/asm-generic/pgalloc.h line 63 enable" > \
        /sys/kernel/debug/allocations.ctx
  cat allocations.ctx
  91.0MiB 212 include/asm-generic/pgalloc.h:63 module:pgtable func:__pte_alloc_one
      size: 4096
      pid: 1551
      tgid: 1551
      comm: cat
      ts: 670109646361
      call stack:
         pte_alloc_one+0xfe/0x130
         __pte_alloc+0x22/0x90
         move_page_tables.part.0+0x994/0xa60
         shift_arg_pages+0xa4/0x180
         setup_arg_pages+0x286/0x2d0
         load_elf_binary+0x4e1/0x18d0
         bprm_execve+0x26b/0x660
         do_execveat_common.isra.0+0x19d/0x220
         __x64_sys_execve+0x2e/0x40
         do_syscall_64+0x38/0x90
         entry_SYSCALL_64_after_hwframe+0x63/0xcd

      size: 4096
      pid: 1551
      tgid: 1551
      comm: cat
      ts: 670109711801
      call stack:
         pte_alloc_one+0xfe/0x130
         __do_fault+0x52/0xc0
         __handle_mm_fault+0x7d9/0xdd0
         handle_mm_fault+0xc0/0x2b0
         do_user_addr_fault+0x1c3/0x660
         exc_page_fault+0x62/0x150
         asm_exc_page_fault+0x22/0x30
  ...

  echo "file include/asm-generic/pgalloc.h line 63 disable" > \
        /sys/kernel/debug/allocations.ctx

Note that disabling context capture will not clear the already captured
context, but no new context will be captured.
Signed-off-by: Suren Baghdasaryan --- include/linux/alloc_tag.h | 25 +++- include/linux/codetag.h | 3 +- include/linux/pgalloc_tag.h | 4 +- lib/Kconfig.debug | 1 + lib/alloc_tag.c | 238 +++++++++++++++++++++++++++++++++++- lib/codetag.c | 20 +-- 6 files changed, 272 insertions(+), 19 deletions(-) diff --git a/include/linux/alloc_tag.h b/include/linux/alloc_tag.h index 07922d81b641..2a3d248aae10 100644 --- a/include/linux/alloc_tag.h +++ b/include/linux/alloc_tag.h @@ -17,20 +17,29 @@ * an array of these. Embedded codetag utilizes codetag framework. */ struct alloc_tag { - struct codetag ct; + struct codetag_with_ctx ctc; struct lazy_percpu_counter bytes_allocated; } __aligned(8); #ifdef CONFIG_MEM_ALLOC_PROFILING +static inline struct alloc_tag *ctc_to_alloc_tag(struct codetag_with_ctx *ctc) +{ + return container_of(ctc, struct alloc_tag, ctc); +} + static inline struct alloc_tag *ct_to_alloc_tag(struct codetag *ct) { - return container_of(ct, struct alloc_tag, ct); + return container_of(ct_to_ctc(ct), struct alloc_tag, ctc); } +struct codetag_ctx *alloc_tag_create_ctx(struct alloc_tag *tag, size_t size); +void alloc_tag_free_ctx(struct codetag_ctx *ctx, struct alloc_tag **ptag); +bool alloc_tag_enable_ctx(struct alloc_tag *tag, bool enable); + #define DEFINE_ALLOC_TAG(_alloc_tag, _old) \ static struct alloc_tag _alloc_tag __used __aligned(8) \ - __section("alloc_tags") = { .ct = CODE_TAG_INIT }; \ + __section("alloc_tags") = { .ctc.ct = CODE_TAG_INIT }; \ struct alloc_tag * __maybe_unused _old = alloc_tag_save(&_alloc_tag) extern struct static_key_true mem_alloc_profiling_key; @@ -54,7 +63,10 @@ static inline void __alloc_tag_sub(union codetag_ref *ref, size_t bytes, if (!ref || !ref->ct) return; - tag = ct_to_alloc_tag(ref->ct); + if (is_codetag_ctx_ref(ref)) + alloc_tag_free_ctx(ref->ctx, &tag); + else + tag = ct_to_alloc_tag(ref->ct); if (may_allocate) lazy_percpu_counter_add(&tag->bytes_allocated, -bytes); @@ -88,7 +100,10 @@ static inline void alloc_tag_add(union codetag_ref *ref, struct alloc_tag *tag, if (!ref || !tag) return; - ref->ct = &tag->ct; + if (codetag_ctx_enabled(&tag->ctc)) + ref->ctx = alloc_tag_create_ctx(tag, bytes); + else + ref->ct = &tag->ctc.ct; lazy_percpu_counter_add(&tag->bytes_allocated, bytes); } diff --git a/include/linux/codetag.h b/include/linux/codetag.h index 9ab2f017e845..b6a2f0287a83 100644 --- a/include/linux/codetag.h +++ b/include/linux/codetag.h @@ -104,7 +104,8 @@ struct codetag_with_ctx *ct_to_ctc(struct codetag *ct) } void codetag_lock_module_list(struct codetag_type *cttype, bool lock); -struct codetag_iterator codetag_get_ct_iter(struct codetag_type *cttype); +void codetag_init_iter(struct codetag_iterator *iter, + struct codetag_type *cttype); struct codetag *codetag_next_ct(struct codetag_iterator *iter); struct codetag_ctx *codetag_next_ctx(struct codetag_iterator *iter); diff --git a/include/linux/pgalloc_tag.h b/include/linux/pgalloc_tag.h index 0cbba13869b5..e4661bbd40c6 100644 --- a/include/linux/pgalloc_tag.h +++ b/include/linux/pgalloc_tag.h @@ -6,6 +6,7 @@ #define _LINUX_PGALLOC_TAG_H #include +#include #ifdef CONFIG_MEM_ALLOC_PROFILING @@ -70,7 +71,8 @@ static inline void pgalloc_tag_split(struct page *page, unsigned int nr) if (!ref->ct) goto out; - tag = ct_to_alloc_tag(ref->ct); + tag = is_codetag_ctx_ref(ref) ? 
ctc_to_alloc_tag(ref->ctx->ctc) + : ct_to_alloc_tag(ref->ct); page_ext = page_ext_next(page_ext); for (i = 1; i < nr; i++) { /* New reference with 0 bytes accounted */ diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index 4157c2251b07..1b83ef17d232 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -969,6 +969,7 @@ config MEM_ALLOC_PROFILING select LAZY_PERCPU_COUNTER select PAGE_EXTENSION select SLAB_OBJ_EXT + select STACKDEPOT help Track allocation source code and record total allocation size initiated at that code location. The mechanism can be used to track diff --git a/lib/alloc_tag.c b/lib/alloc_tag.c index 4a0b95a46b2e..675c7a08e38b 100644 --- a/lib/alloc_tag.c +++ b/lib/alloc_tag.c @@ -1,13 +1,18 @@ // SPDX-License-Identifier: GPL-2.0-only #include +#include #include #include #include #include #include +#include #include +#include #include +#define STACK_BUF_SIZE 1024 + DEFINE_STATIC_KEY_TRUE(mem_alloc_profiling_key); /* @@ -23,6 +28,16 @@ static int __init mem_alloc_profiling_disable(char *s) } __setup("nomem_profiling", mem_alloc_profiling_disable); +struct alloc_call_ctx { + struct codetag_ctx ctx; + size_t size; + pid_t pid; + pid_t tgid; + char comm[TASK_COMM_LEN]; + u64 ts_nsec; + depot_stack_handle_t stack_handle; +} __aligned(8); + struct alloc_tag_file_iterator { struct codetag_iterator ct_iter; struct seq_buf buf; @@ -64,7 +79,7 @@ static int allocations_file_open(struct inode *inode, struct file *file) return -ENOMEM; codetag_lock_module_list(cttype, true); - iter->ct_iter = codetag_get_ct_iter(cttype); + codetag_init_iter(&iter->ct_iter, cttype); codetag_lock_module_list(cttype, false); seq_buf_init(&iter->buf, iter->rawbuf, sizeof(iter->rawbuf)); file->private_data = iter; @@ -125,24 +140,240 @@ static const struct file_operations allocations_file_ops = { .read = allocations_file_read, }; +static void alloc_tag_ops_free_ctx(struct kref *refcount) +{ + kfree(container_of(kref_to_ctx(refcount), struct alloc_call_ctx, ctx)); +} + +struct codetag_ctx *alloc_tag_create_ctx(struct alloc_tag *tag, size_t size) +{ + struct alloc_call_ctx *ac_ctx; + + /* TODO: use a dedicated kmem_cache */ + ac_ctx = kmalloc(sizeof(struct alloc_call_ctx), GFP_KERNEL); + if (WARN_ON(!ac_ctx)) + return NULL; + + ac_ctx->size = size; + ac_ctx->pid = current->pid; + ac_ctx->tgid = current->tgid; + strscpy(ac_ctx->comm, current->comm, sizeof(ac_ctx->comm)); + ac_ctx->ts_nsec = local_clock(); + ac_ctx->stack_handle = + stack_depot_capture_stack(GFP_NOWAIT | __GFP_NOWARN); + add_ctx(&ac_ctx->ctx, &tag->ctc); + + return &ac_ctx->ctx; +} +EXPORT_SYMBOL_GPL(alloc_tag_create_ctx); + +void alloc_tag_free_ctx(struct codetag_ctx *ctx, struct alloc_tag **ptag) +{ + *ptag = ctc_to_alloc_tag(ctx->ctc); + rem_ctx(ctx, alloc_tag_ops_free_ctx); +} +EXPORT_SYMBOL_GPL(alloc_tag_free_ctx); + +bool alloc_tag_enable_ctx(struct alloc_tag *tag, bool enable) +{ + static bool stack_depot_ready; + + if (enable && !stack_depot_ready) { + stack_depot_init(); + stack_depot_capture_init(); + stack_depot_ready = true; + } + + return codetag_enable_ctx(&tag->ctc, enable); +} + +static void alloc_tag_ctx_to_text(struct seq_buf *out, struct codetag_ctx *ctx) +{ + struct alloc_call_ctx *ac_ctx; + char *buf; + + ac_ctx = container_of(ctx, struct alloc_call_ctx, ctx); + seq_buf_printf(out, " size: %zu\n", ac_ctx->size); + seq_buf_printf(out, " pid: %d\n", ac_ctx->pid); + seq_buf_printf(out, " tgid: %d\n", ac_ctx->tgid); + seq_buf_printf(out, " comm: %s\n", ac_ctx->comm); + seq_buf_printf(out, " ts: %llu\n", ac_ctx->ts_nsec); 
+ + buf = kmalloc(STACK_BUF_SIZE, GFP_KERNEL); + if (buf) { + int bytes_read = stack_depot_snprint(ac_ctx->stack_handle, buf, + STACK_BUF_SIZE - 1, 8); + buf[bytes_read] = '\0'; + seq_buf_printf(out, " call stack:\n%s\n", buf); + } + kfree(buf); +} + +static ssize_t allocations_ctx_file_read(struct file *file, char __user *ubuf, + size_t size, loff_t *ppos) +{ + struct alloc_tag_file_iterator *iter = file->private_data; + struct codetag_iterator *ct_iter = &iter->ct_iter; + struct user_buf buf = { .buf = ubuf, .size = size }; + struct codetag_ctx *ctx; + struct codetag *prev_ct; + int err = 0; + + codetag_lock_module_list(ct_iter->cttype, true); + while (1) { + err = flush_ubuf(&buf, &iter->buf); + if (err || !buf.size) + break; + + prev_ct = ct_iter->ct; + ctx = codetag_next_ctx(ct_iter); + if (!ctx) + break; + + if (prev_ct != &ctx->ctc->ct) + alloc_tag_to_text(&iter->buf, &ctx->ctc->ct); + alloc_tag_ctx_to_text(&iter->buf, ctx); + } + codetag_lock_module_list(ct_iter->cttype, false); + + return err ? : buf.ret; +} + +#define CTX_CAPTURE_TOKENS() \ + x(disable, 0) \ + x(enable, 0) + +static const char * const ctx_capture_token_strs[] = { +#define x(name, nr_args) #name, + CTX_CAPTURE_TOKENS() +#undef x + NULL +}; + +enum ctx_capture_token { +#define x(name, nr_args) TOK_##name, + CTX_CAPTURE_TOKENS() +#undef x +}; + +static int enable_ctx_capture(struct codetag_type *cttype, + struct codetag_query *query, bool enable) +{ + struct codetag_iterator ct_iter; + struct codetag_with_ctx *ctc; + struct codetag *ct; + unsigned int nfound = 0; + + codetag_lock_module_list(cttype, true); + + codetag_init_iter(&ct_iter, cttype); + while ((ct = codetag_next_ct(&ct_iter))) { + if (!codetag_matches_query(query, ct, ct_iter.cmod, NULL)) + continue; + + ctc = ct_to_ctc(ct); + if (codetag_ctx_enabled(ctc) == enable) + continue; + + if (!alloc_tag_enable_ctx(ctc_to_alloc_tag(ctc), enable)) { + pr_warn("Failed to toggle context capture\n"); + continue; + } + + nfound++; + } + + codetag_lock_module_list(cttype, false); + + return nfound ? 
0 : -ENOENT; +} + +static int parse_command(struct codetag_type *cttype, char *buf) +{ + struct codetag_query query = { NULL }; + char *cmd; + int ret; + int tok; + + buf = codetag_query_parse(&query, buf); + if (IS_ERR(buf)) + return PTR_ERR(buf); + + cmd = strsep_no_empty(&buf, " \t\r\n"); + if (!cmd) + return -EINVAL; /* no command */ + + tok = match_string(ctx_capture_token_strs, + ARRAY_SIZE(ctx_capture_token_strs), cmd); + if (tok < 0) + return -EINVAL; /* unknown command */ + + ret = enable_ctx_capture(cttype, &query, tok == TOK_enable); + if (ret < 0) + return ret; + + return 0; +} + +static ssize_t allocations_ctx_file_write(struct file *file, const char __user *ubuf, + size_t len, loff_t *offp) +{ + struct alloc_tag_file_iterator *iter = file->private_data; + char tmpbuf[256]; + + if (len == 0) + return 0; + /* we don't check *offp -- multiple writes() are allowed */ + if (len > sizeof(tmpbuf) - 1) + return -E2BIG; + + if (copy_from_user(tmpbuf, ubuf, len)) + return -EFAULT; + + tmpbuf[len] = '\0'; + parse_command(iter->ct_iter.cttype, tmpbuf); + + *offp += len; + return len; +} + +static const struct file_operations allocations_ctx_file_ops = { + .owner = THIS_MODULE, + .open = allocations_file_open, + .release = allocations_file_release, + .read = allocations_ctx_file_read, + .write = allocations_ctx_file_write, +}; + static int __init dbgfs_init(struct codetag_type *cttype) { struct dentry *file; + struct dentry *ctx_file; file = debugfs_create_file("allocations", 0444, NULL, cttype, &allocations_file_ops); + if (IS_ERR(file)) + return PTR_ERR(file); + + ctx_file = debugfs_create_file("allocations.ctx", 0666, NULL, cttype, + &allocations_ctx_file_ops); + if (IS_ERR(ctx_file)) { + debugfs_remove(file); + return PTR_ERR(ctx_file); + } - return IS_ERR(file) ? 
PTR_ERR(file) : 0; + return 0; } static bool alloc_tag_module_unload(struct codetag_type *cttype, struct codetag_module *cmod) { - struct codetag_iterator iter = codetag_get_ct_iter(cttype); + struct codetag_iterator iter; bool module_unused = true; struct alloc_tag *tag; struct codetag *ct; size_t bytes; + codetag_init_iter(&iter, cttype); for (ct = codetag_next_ct(&iter); ct; ct = codetag_next_ct(&iter)) { if (iter.cmod != cmod) continue; @@ -183,6 +414,7 @@ static int __init alloc_tag_init(void) .section = "alloc_tags", .tag_size = sizeof(struct alloc_tag), .module_unload = alloc_tag_module_unload, + .free_ctx = alloc_tag_ops_free_ctx, }; cttype = codetag_register_type(&desc); diff --git a/lib/codetag.c b/lib/codetag.c index d891bbe4481d..cbff146b3fe8 100644 --- a/lib/codetag.c +++ b/lib/codetag.c @@ -27,16 +27,14 @@ void codetag_lock_module_list(struct codetag_type *cttype, bool lock) up_read(&cttype->mod_lock); } -struct codetag_iterator codetag_get_ct_iter(struct codetag_type *cttype) +void codetag_init_iter(struct codetag_iterator *iter, + struct codetag_type *cttype) { - struct codetag_iterator iter = { - .cttype = cttype, - .cmod = NULL, - .mod_id = 0, - .ct = NULL, - }; - - return iter; + iter->cttype = cttype; + iter->cmod = NULL; + iter->mod_id = 0; + iter->ct = NULL; + iter->ctx = NULL; } static inline struct codetag *get_first_module_ct(struct codetag_module *cmod) @@ -128,6 +126,10 @@ struct codetag_ctx *codetag_next_ctx(struct codetag_iterator *iter) lockdep_assert_held(&iter->cttype->mod_lock); + /* Move to the first codetag if search just started */ + if (!iter->ct) + codetag_next_ct(iter); + if (!ctx) return next_ctx_from_ct(iter);
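As a reading aid (not part of the patch), this is how a consumer of union codetag_ref distinguishes the two cases once context capture can be enabled per tag; the helper name ref_to_tag is made up and simply mirrors the pgalloc_tag_split() hunk above:

  #include <linux/alloc_tag.h>

  static struct alloc_tag *ref_to_tag(union codetag_ref *ref)
  {
          if (!ref->ct)
                  return NULL;

          /* With context capture enabled the ref points at a codetag_ctx... */
          if (is_codetag_ctx_ref(ref))
                  return ctc_to_alloc_tag(ref->ctx->ctc);

          /* ...otherwise it still points directly at the tag's codetag. */
          return ct_to_alloc_tag(ref->ct);
  }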

From patchwork Mon May 1 16:54:46 2023
Subject: [PATCH 36/40] lib: add memory allocations report in show_mem()
Date: Mon, 1 May 2023 09:54:46 -0700
Message-ID: <20230501165450.15352-37-surenb@google.com>
In-Reply-To: <20230501165450.15352-1-surenb@google.com>
References: <20230501165450.15352-1-surenb@google.com>
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: kent.overstreet@linux.dev, mhocko@suse.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org

Include allocations in show_mem() reports.

Signed-off-by: Kent Overstreet
Signed-off-by: Suren Baghdasaryan --- include/linux/alloc_tag.h | 2 ++ lib/alloc_tag.c | 48 +++++++++++++++++++++++++++++++++++---- lib/show_mem.c | 15 ++++++++++++ 3 files changed, 60 insertions(+), 5 deletions(-) diff --git a/include/linux/alloc_tag.h b/include/linux/alloc_tag.h index 2a3d248aae10..190ab793f7e5 100644 --- a/include/linux/alloc_tag.h +++ b/include/linux/alloc_tag.h @@ -23,6 +23,8 @@ struct alloc_tag { #ifdef CONFIG_MEM_ALLOC_PROFILING +void alloc_tags_show_mem_report(struct seq_buf *s); + static inline struct alloc_tag *ctc_to_alloc_tag(struct codetag_with_ctx *ctc) { return container_of(ctc, struct alloc_tag, ctc); } diff --git a/lib/alloc_tag.c b/lib/alloc_tag.c index 675c7a08e38b..e2ebab8999a9 100644 --- a/lib/alloc_tag.c +++ b/lib/alloc_tag.c @@ -13,6 +13,8 @@ #define STACK_BUF_SIZE 1024 +static struct codetag_type *alloc_tag_cttype; + DEFINE_STATIC_KEY_TRUE(mem_alloc_profiling_key); /* @@ -133,6 +135,43 @@ static ssize_t allocations_file_read(struct file *file, char __user *ubuf, return err ?
: buf.ret; } +void alloc_tags_show_mem_report(struct seq_buf *s) +{ + struct codetag_iterator iter; + struct codetag *ct; + struct { + struct codetag *tag; + size_t bytes; + } tags[10], n; + unsigned int i, nr = 0; + + codetag_init_iter(&iter, alloc_tag_cttype); + + codetag_lock_module_list(alloc_tag_cttype, true); + while ((ct = codetag_next_ct(&iter))) { + n.tag = ct; + n.bytes = lazy_percpu_counter_read(&ct_to_alloc_tag(ct)->bytes_allocated); + + for (i = 0; i < nr; i++) + if (n.bytes > tags[i].bytes) + break; + + if (i < ARRAY_SIZE(tags)) { + nr -= nr == ARRAY_SIZE(tags); + memmove(&tags[i + 1], + &tags[i], + sizeof(tags[0]) * (nr - i)); + nr++; + tags[i] = n; + } + } + + for (i = 0; i < nr; i++) + alloc_tag_to_text(s, tags[i].tag); + + codetag_lock_module_list(alloc_tag_cttype, false); +} + static const struct file_operations allocations_file_ops = { .owner = THIS_MODULE, .open = allocations_file_open, @@ -409,7 +448,6 @@ EXPORT_SYMBOL(page_alloc_tagging_ops); static int __init alloc_tag_init(void) { - struct codetag_type *cttype; const struct codetag_type_desc desc = { .section = "alloc_tags", .tag_size = sizeof(struct alloc_tag), @@ -417,10 +455,10 @@ static int __init alloc_tag_init(void) .free_ctx = alloc_tag_ops_free_ctx, }; - cttype = codetag_register_type(&desc); - if (IS_ERR_OR_NULL(cttype)) - return PTR_ERR(cttype); + alloc_tag_cttype = codetag_register_type(&desc); + if (IS_ERR_OR_NULL(alloc_tag_cttype)) + return PTR_ERR(alloc_tag_cttype); - return dbgfs_init(cttype); + return dbgfs_init(alloc_tag_cttype); } module_init(alloc_tag_init); diff --git a/lib/show_mem.c b/lib/show_mem.c index 1485c87be935..5c82f29168e3 100644 --- a/lib/show_mem.c +++ b/lib/show_mem.c @@ -7,6 +7,7 @@ #include #include +#include void __show_mem(unsigned int filter, nodemask_t *nodemask, int max_zone_idx) { @@ -34,4 +35,18 @@ void __show_mem(unsigned int filter, nodemask_t *nodemask, int max_zone_idx) #ifdef CONFIG_MEMORY_FAILURE printk("%lu pages hwpoisoned\n", atomic_long_read(&num_poisoned_pages)); #endif +#ifdef CONFIG_MEM_ALLOC_PROFILING + { + struct seq_buf s; + char *buf = kmalloc(4096, GFP_ATOMIC); + + if (buf) { + printk("Memory allocations:\n"); + seq_buf_init(&s, buf, 4096); + alloc_tags_show_mem_report(&s); + printk("%s", buf); + kfree(buf); + } + } +#endif }
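For reference, a minimal userspace-only sketch of the top-N selection loop used by alloc_tags_show_mem_report() above: walk every entry and keep the ten largest in a small array kept sorted by size. The entry names and sample data below are invented for illustration; only the insertion logic follows the patch.

/*
 * Illustrative sketch (not kernel code): keep the ten largest entries,
 * sorted by size, the same way the selection loop in the patch does.
 */
#include <stdio.h>
#include <string.h>

#define TOP_N 10

struct entry {
	const char *name;	/* stands in for the codetag's source location */
	size_t bytes;
};

static void report_top(const struct entry *all, unsigned int count)
{
	struct entry top[TOP_N];
	unsigned int i, j, nr = 0;

	for (j = 0; j < count; j++) {
		/* find where this entry belongs, descending by bytes */
		for (i = 0; i < nr; i++)
			if (all[j].bytes > top[i].bytes)
				break;

		if (i < TOP_N) {
			/* if the array is already full, the smallest element is dropped */
			nr -= (nr == TOP_N);
			memmove(&top[i + 1], &top[i], sizeof(top[0]) * (nr - i));
			top[i] = all[j];
			nr++;
		}
	}

	for (i = 0; i < nr; i++)
		printf("%zu bytes %s\n", top[i].bytes, top[i].name);
}

int main(void)
{
	/* invented sample data */
	const struct entry sample[] = {
		{ "fs/inode.c:alloc_inode", 4096 },
		{ "mm/slub.c:new_slab", 65536 },
		{ "net/core/skbuff.c:__alloc_skb", 16384 },
	};

	report_top(sample, 3);
	return 0;
}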
From patchwork Mon May 1 16:54:47 2023
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 89106
Date: Mon, 1 May 2023 09:54:47 -0700
In-Reply-To: <20230501165450.15352-1-surenb@google.com>
References: <20230501165450.15352-1-surenb@google.com>
Message-ID: <20230501165450.15352-38-surenb@google.com>
Subject: [PATCH 37/40] codetag: debug: skip objext checking when it's for objext itself
From: Suren Baghdasaryan
To: akpm@linux-foundation.org

objext objects are created with the __GFP_NO_OBJ_EXT flag and therefore have no corresponding objext themselves (otherwise we would get infinite recursion). When these objects are freed, their codetag will be empty, and when CONFIG_MEM_ALLOC_PROFILING_DEBUG is enabled this leads to false warnings. Introduce a special CODETAG_EMPTY codetag value to mark allocations which intentionally lack a codetag, avoiding these warnings. Set objext codetags to CODETAG_EMPTY before freeing to indicate that the codetag is expected to be empty.
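CODETAG_EMPTY is essentially a sentinel-pointer pattern: a tag reference can be NULL (unexpectedly missing), point to a real tag, or hold a special non-NULL marker meaning "intentionally has no tag", which is what lets the debug checks stay quiet for the objext vectors themselves. Below is a minimal userspace sketch of that pattern; all names are invented for the example, and only the idea of a (void *)1 style marker comes from the patch.

/*
 * Sketch only: distinguish "never tagged" (NULL, worth warning about)
 * from "intentionally untagged" (a sentinel value, silently accepted).
 */
#include <stdio.h>

struct tag { const char *site; };

/* any non-NULL value that can never be a real pointer works as a marker */
#define TAG_EMPTY ((struct tag *)1)

static void check_on_free(struct tag **ref)
{
	if (*ref == TAG_EMPTY) {
		*ref = NULL;	/* expected to be empty: clear it, no warning */
		return;
	}
	if (*ref == NULL) {
		fprintf(stderr, "warning: object freed without a tag\n");
		return;
	}
	printf("releasing tag for %s\n", (*ref)->site);
	*ref = NULL;
}

int main(void)
{
	struct tag t = { "mm/slub.c:1234" };	/* invented tag */
	struct tag *tagged = &t;
	struct tag *untagged = NULL;
	struct tag *internal = TAG_EMPTY;	/* e.g. the objext vector itself */

	check_on_free(&tagged);		/* normal free path */
	check_on_free(&internal);	/* marked empty on purpose: silent */
	check_on_free(&untagged);	/* this one would trigger the warning */
	return 0;
}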
Signed-off-by: Suren Baghdasaryan --- include/linux/alloc_tag.h | 28 ++++++++++++++++++++++++++++ mm/slab.h | 33 +++++++++++++++++++++++++++++++++ mm/slab_common.c | 1 + 3 files changed, 62 insertions(+) diff --git a/include/linux/alloc_tag.h b/include/linux/alloc_tag.h index 190ab793f7e5..2c3f4f3a8c93 100644 --- a/include/linux/alloc_tag.h +++ b/include/linux/alloc_tag.h @@ -51,6 +51,28 @@ static inline bool mem_alloc_profiling_enabled(void) return static_branch_likely(&mem_alloc_profiling_key); } +#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG + +#define CODETAG_EMPTY (void *)1 + +static inline bool is_codetag_empty(union codetag_ref *ref) +{ + return ref->ct == CODETAG_EMPTY; +} + +static inline void set_codetag_empty(union codetag_ref *ref) +{ + if (ref) + ref->ct = CODETAG_EMPTY; +} + +#else /* CONFIG_MEM_ALLOC_PROFILING_DEBUG */ + +static inline bool is_codetag_empty(union codetag_ref *ref) { return false; } +static inline void set_codetag_empty(union codetag_ref *ref) {} + +#endif /* CONFIG_MEM_ALLOC_PROFILING_DEBUG */ + static inline void __alloc_tag_sub(union codetag_ref *ref, size_t bytes, bool may_allocate) { @@ -65,6 +87,11 @@ static inline void __alloc_tag_sub(union codetag_ref *ref, size_t bytes, if (!ref || !ref->ct) return; + if (is_codetag_empty(ref)) { + ref->ct = NULL; + return; + } + if (is_codetag_ctx_ref(ref)) alloc_tag_free_ctx(ref->ctx, &tag); else @@ -112,6 +139,7 @@ static inline void alloc_tag_add(union codetag_ref *ref, struct alloc_tag *tag, #else #define DEFINE_ALLOC_TAG(_alloc_tag, _old) +static inline void set_codetag_empty(union codetag_ref *ref) {} static inline void alloc_tag_sub(union codetag_ref *ref, size_t bytes) {} static inline void alloc_tag_sub_noalloc(union codetag_ref *ref, size_t bytes) {} static inline void alloc_tag_add(union codetag_ref *ref, struct alloc_tag *tag, diff --git a/mm/slab.h b/mm/slab.h index f9442d3a10b2..50d86008a86a 100644 --- a/mm/slab.h +++ b/mm/slab.h @@ -416,6 +416,31 @@ static inline struct slabobj_ext *slab_obj_exts(struct slab *slab) int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s, gfp_t gfp, bool new_slab); + +#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG + +static inline void mark_objexts_empty(struct slabobj_ext *obj_exts) +{ + struct slabobj_ext *slab_exts; + struct slab *obj_exts_slab; + + obj_exts_slab = virt_to_slab(obj_exts); + slab_exts = slab_obj_exts(obj_exts_slab); + if (slab_exts) { + unsigned int offs = obj_to_index(obj_exts_slab->slab_cache, + obj_exts_slab, obj_exts); + /* codetag should be NULL */ + WARN_ON(slab_exts[offs].ref.ct); + set_codetag_empty(&slab_exts[offs].ref); + } +} + +#else /* CONFIG_MEM_ALLOC_PROFILING_DEBUG */ + +static inline void mark_objexts_empty(struct slabobj_ext *obj_exts) {} + +#endif /* CONFIG_MEM_ALLOC_PROFILING_DEBUG */ + static inline bool need_slab_obj_ext(void) { #ifdef CONFIG_MEM_ALLOC_PROFILING @@ -437,6 +462,14 @@ static inline void free_slab_obj_exts(struct slab *slab) if (!obj_exts) return; + /* + * obj_exts was created with __GFP_NO_OBJ_EXT flag, therefore its + * corresponding extension will be NULL. alloc_tag_sub() will throw a + * warning if slab has extensions but the extension of an object is + * NULL, therefore replace NULL with CODETAG_EMPTY to indicate that + * the extension for obj_exts is expected to be NULL. 
+ */ + mark_objexts_empty(obj_exts); kfree(obj_exts); slab->obj_exts = 0; } diff --git a/mm/slab_common.c b/mm/slab_common.c index a05333bbb7f1..89265f825c43 100644 --- a/mm/slab_common.c +++ b/mm/slab_common.c @@ -244,6 +244,7 @@ int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s, * assign slabobj_exts in parallel. In this case the existing * objcg vector should be reused. */ + mark_objexts_empty(vec); kfree(vec); return 0; }

From patchwork Mon May 1 16:54:48 2023
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 89124
Date: Mon, 1 May 2023 09:54:48 -0700
In-Reply-To: <20230501165450.15352-1-surenb@google.com>
References: <20230501165450.15352-1-surenb@google.com>
Message-ID: <20230501165450.15352-39-surenb@google.com>
Subject: [PATCH 38/40] codetag: debug: mark codetags for reserved pages as empty
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
To avoid debug warnings when freeing reserved pages, which were not allocated with the usual allocators, mark their codetags as empty before freeing. Maybe we can annotate reserved pages correctly and avoid this?

Signed-off-by: Suren Baghdasaryan
--- include/linux/mm.h | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/include/linux/mm.h b/include/linux/mm.h index 27ce77080c79..f5969cb85879 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -5,6 +5,7 @@ #include #include #include +#include #include #include #include @@ -2920,6 +2921,13 @@ extern void reserve_bootmem_region(phys_addr_t start, phys_addr_t end); /* Free the reserved page into the buddy system, so it gets managed.
*/ static inline void free_reserved_page(struct page *page) { + union codetag_ref *ref; + + ref = get_page_tag_ref(page); + if (ref) { + set_codetag_empty(ref); + put_page_tag_ref(ref); + } ClearPageReserved(page); init_page_count(page); __free_page(page);

From patchwork Mon May 1 16:54:49 2023
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 89108
Date: Mon, 1 May 2023 09:54:49 -0700
In-Reply-To: <20230501165450.15352-1-surenb@google.com>
References: <20230501165450.15352-1-surenb@google.com>
Message-ID: <20230501165450.15352-40-surenb@google.com>
Subject: [PATCH 39/40] codetag: debug: introduce OBJEXTS_ALLOC_FAIL to mark failed slab_ext allocations
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
If slabobj_ext vector allocation for a slab object fails, and later on it succeeds for another object in the same slab, the slabobj_ext for the original object will be NULL, which triggers a warning when CONFIG_MEM_ALLOC_PROFILING_DEBUG is enabled. Mark failed slabobj_ext vector allocations using a new objext_flags flag stored in the lower bits of slab->obj_exts. When a new allocation succeeds, mark all tag references in the same slabobj_ext vector as empty to avoid the warnings implemented by the CONFIG_MEM_ALLOC_PROFILING_DEBUG checks.
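The failure marker works because the slabobj_ext vector pointer stored in slab->obj_exts is aligned, so its low bits are always zero and can carry flags instead. A minimal sketch of that tagged-pointer encoding follows; the OBJEXTS_ALLOC_FAIL name is taken from the patch, while the mask value and the fake_slab structure are invented for illustration.

/*
 * Sketch only: store either a flag ("allocation failed earlier") or an
 * aligned vector pointer in the same unsigned long, as slab->obj_exts
 * does.  Mask value and structures are invented for this example.
 */
#include <stdio.h>
#include <stdlib.h>

#define OBJEXTS_ALLOC_FAIL	0x1UL	/* flag name from the patch */
#define OBJEXTS_FLAGS_MASK	0x3UL	/* low bits reserved for flags */

struct slabobj_ext { void *ref; };
struct fake_slab { unsigned long obj_exts; };

static struct slabobj_ext *obj_exts_vec(const struct fake_slab *slab)
{
	return (struct slabobj_ext *)(slab->obj_exts & ~OBJEXTS_FLAGS_MASK);
}

int main(void)
{
	struct fake_slab slab = { 0 };
	struct slabobj_ext *vec;
	unsigned int objects = 16, i;

	/* first allocation attempt fails: remember that in the flag bits */
	slab.obj_exts = OBJEXTS_ALLOC_FAIL;

	/* a later attempt succeeds; calloc() returns a suitably aligned pointer */
	vec = calloc(objects, sizeof(*vec));
	if (!vec)
		return 1;

	if (slab.obj_exts & OBJEXTS_ALLOC_FAIL) {
		/* earlier failure recorded: mark every ref as intentionally empty */
		for (i = 0; i < objects; i++)
			vec[i].ref = (void *)1;
	}

	slab.obj_exts = (unsigned long)vec;	/* flags cleared, pointer stored */
	printf("vector at %p, flag bits %lx\n",
	       (void *)obj_exts_vec(&slab), slab.obj_exts & OBJEXTS_FLAGS_MASK);

	free(vec);
	return 0;
}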
Signed-off-by: Suren Baghdasaryan --- include/linux/memcontrol.h | 4 +++- mm/slab_common.c | 27 +++++++++++++++++++++++++-- 2 files changed, 28 insertions(+), 3 deletions(-) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index c7f21b15b540..3eb8975c1462 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -356,8 +356,10 @@ enum page_memcg_data_flags { #endif /* CONFIG_MEMCG */ enum objext_flags { + /* slabobj_ext vector failed to allocate */ + OBJEXTS_ALLOC_FAIL = __FIRST_OBJEXT_FLAG, /* the next bit after the last actual flag */ - __NR_OBJEXTS_FLAGS = __FIRST_OBJEXT_FLAG, + __NR_OBJEXTS_FLAGS = (__FIRST_OBJEXT_FLAG << 1), }; #define OBJEXTS_FLAGS_MASK (__NR_OBJEXTS_FLAGS - 1) diff --git a/mm/slab_common.c b/mm/slab_common.c index 89265f825c43..5b7e096b70a5 100644 --- a/mm/slab_common.c +++ b/mm/slab_common.c @@ -217,21 +217,44 @@ int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s, { unsigned int objects = objs_per_slab(s, slab); unsigned long obj_exts; - void *vec; + struct slabobj_ext *vec; gfp &= ~OBJCGS_CLEAR_MASK; /* Prevent recursive extension vector allocation */ gfp |= __GFP_NO_OBJ_EXT; vec = kcalloc_node(objects, sizeof(struct slabobj_ext), gfp, slab_nid(slab)); - if (!vec) + if (!vec) { +#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG + if (new_slab) { + /* Mark vectors which failed to allocate */ + slab->obj_exts = OBJEXTS_ALLOC_FAIL; +#ifdef CONFIG_MEMCG + slab->obj_exts |= MEMCG_DATA_OBJEXTS; +#endif + } +#endif return -ENOMEM; + } obj_exts = (unsigned long)vec; #ifdef CONFIG_MEMCG obj_exts |= MEMCG_DATA_OBJEXTS; #endif if (new_slab) { +#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG + /* + * If vector previously failed to allocate then we have live + * objects with no tag reference. Mark all references in this + * vector as empty to avoid warnings later on. 
+ */ + if (slab->obj_exts & OBJEXTS_ALLOC_FAIL) { + unsigned int i; + + for (i = 0; i < objects; i++) + set_codetag_empty(&vec[i].ref); + } +#endif /* * If the slab is brand new and nobody can yet access its * obj_exts, no synchronization is required and obj_exts can

From patchwork Mon May 1 16:54:50 2023
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 89110
Date: Mon, 1 May 2023 09:54:50 -0700
In-Reply-To: <20230501165450.15352-1-surenb@google.com>
References: <20230501165450.15352-1-surenb@google.com>
Message-ID: <20230501165450.15352-41-surenb@google.com>
Subject: [PATCH 40/40] MAINTAINERS: Add entries for code tagging and memory allocation profiling
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
From: Kent Overstreet

The new code and libraries added by this series are being maintained; mark them as such.

Signed-off-by: Kent Overstreet
Signed-off-by: Suren Baghdasaryan
--- MAINTAINERS | 22 ++++++++++++++++++++++ 1 file changed, 22 insertions(+) diff --git a/MAINTAINERS b/MAINTAINERS index 3889d1adf71f..6f3b79266204 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -5116,6 +5116,13 @@ S: Supported F: Documentation/process/code-of-conduct-interpretation.rst F: Documentation/process/code-of-conduct.rst +CODE TAGGING +M: Suren Baghdasaryan +M: Kent Overstreet +S: Maintained +F: include/linux/codetag.h +F: lib/codetag.c + COMEDI DRIVERS M: Ian Abbott M: H Hartley Sweeten @@ -11658,6 +11665,12 @@ S: Maintained F: Documentation/devicetree/bindings/leds/backlight/kinetic,ktz8866.yaml F: drivers/video/backlight/ktz8866.c +LAZY PERCPU COUNTERS +M: Kent Overstreet +S: Maintained +F: include/linux/lazy-percpu-counter.h +F: lib/lazy-percpu-counter.c + L3MDEV M: David Ahern L: netdev@vger.kernel.org @@ -13468,6 +13481,15 @@ F: mm/memblock.c F: mm/mm_init.c F: tools/testing/memblock/ +MEMORY ALLOCATION PROFILING +M: Suren Baghdasaryan +M: Kent Overstreet +S: Maintained +F: include/linux/alloc_tag.h +F: include/linux/codetag_ctx.h +F: lib/alloc_tag.c +F: lib/pgalloc_tag.c + MEMORY CONTROLLER DRIVERS M: Krzysztof Kozlowski L: linux-kernel@vger.kernel.org