Message ID | 20240212213922.783301-19-surenb@google.com |
---|---|
State | New |
Headers |
Date: Mon, 12 Feb 2024 13:39:04 -0800
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Subject: [PATCH v3 18/35] mm: create new codetag references during page splitting
Message-ID: <20240212213922.783301-19-surenb@google.com>
In-Reply-To: <20240212213922.783301-1-surenb@google.com>
List-Id: <linux-kernel.vger.kernel.org> |
Series | Memory allocation profiling |
Commit Message
Suren Baghdasaryan
Feb. 12, 2024, 9:39 p.m. UTC
When a high-order page is split into smaller ones, each newly split
page should get its own codetag reference. The original codetag is
reused for these pages, but each new reference is recorded as a 0-byte
allocation because the original codetag already accounts for the whole
high-order allocation.
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 include/linux/pgalloc_tag.h | 30 ++++++++++++++++++++++++++++++
 mm/huge_memory.c            |  2 ++
 mm/page_alloc.c             |  2 ++
 3 files changed, 34 insertions(+)
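The accounting scheme described in the commit message can be sketched as a small userspace model. This is illustrative C, not the kernel implementation: the struct layout, helper names, and the assumption that the free path later subtracts each split page's own size are all simplifications for the sketch.

```c
#include <stddef.h>

/* Userspace model of the per-call-site counters; field names are
 * illustrative, not the kernel's actual struct layout. */
struct tag_model {
	size_t bytes;	/* bytes currently attributed to the call site */
	size_t calls;	/* outstanding references pointing at the tag */
};

static void tag_add(struct tag_model *tag, size_t bytes)
{
	tag->bytes += bytes;
	tag->calls += 1;
}

static void tag_sub(struct tag_model *tag, size_t bytes)
{
	tag->bytes -= bytes;
	tag->calls -= 1;
}

/* Allocate one order-2 page (4 pages, one call), split it into four
 * order-0 pages, then free each page individually. */
static struct tag_model split_scenario(size_t page_size)
{
	struct tag_model tag = {0, 0};
	size_t i;

	tag_add(&tag, 4 * page_size);	/* one high-order allocation */
	for (i = 1; i < 4; i++)
		tag_add(&tag, 0);	/* split: new 0-byte references */
	for (i = 0; i < 4; i++)
		tag_sub(&tag, page_size); /* each page freed on its own */
	return tag;
}
```

The point of the model is the invariant: the split adds references with 0 bytes (the bytes were accounted once, at allocation time) but it must add one call per new reference, so the per-page decrements on free bring both counters back to zero.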
Comments
On 2/12/24 22:39, Suren Baghdasaryan wrote:
> When a high-order page is split into smaller ones, each newly split
> page should get its codetag. The original codetag is reused for these
> pages but it's recorded as 0-byte allocation because original codetag
> already accounts for the original high-order allocated page.

Wouldn't it be possible to adjust the original's accounted size and
redistribute to the split pages for more accuracy?
On Fri, Feb 16, 2024 at 6:33 AM Vlastimil Babka <vbabka@suse.cz> wrote:
>
> On 2/12/24 22:39, Suren Baghdasaryan wrote:
> > When a high-order page is split into smaller ones, each newly split
> > page should get its codetag. The original codetag is reused for these
> > pages but it's recorded as 0-byte allocation because original codetag
> > already accounts for the original high-order allocated page.
>
> Wouldn't it be possible to adjust the original's accounted size and
> redistribute to the split pages for more accuracy?

I can't recall why I didn't do it that way but I'll try to change and
see if something non-obvious comes up. Thanks!
On Fri, Feb 16, 2024 at 4:46 PM Suren Baghdasaryan <surenb@google.com> wrote:
>
> On Fri, Feb 16, 2024 at 6:33 AM Vlastimil Babka <vbabka@suse.cz> wrote:
> > Wouldn't it be possible to adjust the original's accounted size and
> > redistribute to the split pages for more accuracy?
>
> I can't recall why I didn't do it that way but I'll try to change and
> see if something non-obvious comes up. Thanks!

Ok, now I recall what's happening here. alloc_tag_add() effectively does
two things:
1. it sets the reference to point to the tag (ref->ct = &tag->ct)
2. it increments tag->counters

In pgalloc_tag_split(), by calling

    alloc_tag_add(codetag_ref_from_page_ext(page_ext), tag, 0);

we effectively set the reference from the new page_ext to point to the
original tag, but we keep the tag->counters->bytes counter the same
(incrementing by 0). It still increments tag->counters->calls, but I
think we need that because when freeing individual split pages we will
be decrementing this counter for each individual page. We allocated many
pages with one call, then split into smaller pages and will be freeing
them with multiple calls. We need to balance out the call counter during
the split.

I can refactor the part of alloc_tag_add() that sets the reference into
a separate alloc_tag_ref_set() and make it set the reference and
increment tag->counters->calls (with a comment explaining why we need
this increment here). Then I can call alloc_tag_ref_set() from inside
alloc_tag_add() and when splitting pages. I think that will be a bit
more clear.
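The refactor sketched in that reply might look roughly like the following. This is a hypothetical userspace rendering, not kernel code: the struct layouts are illustrative stand-ins, and only the split of alloc_tag_ref_set() out of alloc_tag_add() follows what the email describes.

```c
#include <stddef.h>

/* Illustrative stand-ins for the kernel types; real layouts differ. */
struct alloc_tag_counters { size_t bytes; size_t calls; };
struct alloc_tag { struct alloc_tag_counters counters; };
union codetag_ref { struct alloc_tag *ct; };

/* Point a reference at the tag and bump the call counter. Each split
 * page will later be freed with its own call, so the counter must be
 * incremented once per new reference to balance those decrements. */
static void alloc_tag_ref_set(union codetag_ref *ref, struct alloc_tag *tag)
{
	ref->ct = tag;
	tag->counters.calls += 1;
}

/* alloc_tag_add() then becomes reference setup plus byte accounting;
 * the split path calls alloc_tag_ref_set() directly (no bytes added). */
static void alloc_tag_add(union codetag_ref *ref, struct alloc_tag *tag,
			  size_t bytes)
{
	alloc_tag_ref_set(ref, tag);
	tag->counters.bytes += bytes;
}
```

With this shape, pgalloc_tag_split() would call alloc_tag_ref_set() for each tail page instead of alloc_tag_add(..., 0), making the "reference plus call-counter only" intent explicit.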
diff --git a/include/linux/pgalloc_tag.h b/include/linux/pgalloc_tag.h
index a060c26eb449..0174aff5e871 100644
--- a/include/linux/pgalloc_tag.h
+++ b/include/linux/pgalloc_tag.h
@@ -62,11 +62,41 @@ static inline void pgalloc_tag_sub(struct page *page, unsigned int order)
 	}
 }
 
+static inline void pgalloc_tag_split(struct page *page, unsigned int nr)
+{
+	int i;
+	struct page_ext *page_ext;
+	union codetag_ref *ref;
+	struct alloc_tag *tag;
+
+	if (!mem_alloc_profiling_enabled())
+		return;
+
+	page_ext = page_ext_get(page);
+	if (unlikely(!page_ext))
+		return;
+
+	ref = codetag_ref_from_page_ext(page_ext);
+	if (!ref->ct)
+		goto out;
+
+	tag = ct_to_alloc_tag(ref->ct);
+	page_ext = page_ext_next(page_ext);
+	for (i = 1; i < nr; i++) {
+		/* New reference with 0 bytes accounted */
+		alloc_tag_add(codetag_ref_from_page_ext(page_ext), tag, 0);
+		page_ext = page_ext_next(page_ext);
+	}
+out:
+	page_ext_put(page_ext);
+}
+
 #else /* CONFIG_MEM_ALLOC_PROFILING */
 
 static inline void pgalloc_tag_add(struct page *page, struct task_struct *task,
 				   unsigned int order) {}
 static inline void pgalloc_tag_sub(struct page *page, unsigned int order) {}
+static inline void pgalloc_tag_split(struct page *page, unsigned int nr) {}
 
 #endif /* CONFIG_MEM_ALLOC_PROFILING */
 
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 94c958f7ebb5..86daae671319 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -38,6 +38,7 @@
 #include <linux/sched/sysctl.h>
 #include <linux/memory-tiers.h>
 #include <linux/compat.h>
+#include <linux/pgalloc_tag.h>
 
 #include <asm/tlb.h>
 #include <asm/pgalloc.h>
@@ -2899,6 +2900,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 
 	/* Caller disabled irqs, so they are still disabled here */
 	split_page_owner(head, nr);
+	pgalloc_tag_split(head, nr);
 
 	/* See comment in __split_huge_page_tail() */
 	if (PageAnon(head)) {
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 58c0e8b948a4..4bc5b4720fee 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2621,6 +2621,7 @@ void split_page(struct page *page, unsigned int order)
 	for (i = 1; i < (1 << order); i++)
 		set_page_refcounted(page + i);
 	split_page_owner(page, 1 << order);
+	pgalloc_tag_split(page, 1 << order);
 	split_page_memcg(page, 1 << order);
 }
 EXPORT_SYMBOL_GPL(split_page);
@@ -4806,6 +4807,7 @@ static void *make_alloc_exact(unsigned long addr, unsigned int order,
 	struct page *last = page + nr;
 
 	split_page_owner(page, 1 << order);
+	pgalloc_tag_split(page, 1 << order);
 	split_page_memcg(page, 1 << order);
 	while (page < --last)
 		set_page_refcounted(last);