[v2,09/10] mm/mmu_gather: improve cond_resched() handling with large folios and expensive page freeing
Message ID | 20240209221509.585251-10-david@redhat.com |
---|---|
State | New |
Series | mm/memory: optimize unmap/zap with PTE-mapped THP |
Commit Message
David Hildenbrand
Feb. 9, 2024, 10:15 p.m. UTC
It's a pain that we have to handle cond_resched() in
tlb_batch_pages_flush() manually and cannot simply handle it in
release_pages() -- release_pages() can be called from atomic context.
Well, in a perfect world we wouldn't have to make our code more complicated at all.
With page poisoning and init_on_free, we might now run into soft lockups
when we free a lot of rather large folio fragments, because page freeing
time then depends on the actual memory size we are freeing instead of on
the number of folios that are involved.
In the absolute (unlikely) worst case, on arm64 with 64k we will be able
to free up to 256 folio fragments that each span 512 MiB: zeroing out 128
GiB does sound like it might take a while. But instead of ignoring this
unlikely case, let's just handle it.
So, let's teach tlb_batch_pages_flush() that there are some
configurations where page freeing is horribly slow, and let's reschedule
more frequently -- similar to what we did before we had large folio
fragments in there. Note that we might end up freeing only a single folio
fragment at a time that might exceed the old 512 pages limit: but if we
cannot even free a single MAX_ORDER page on a system without running into
soft lockups, something else is already completely bogus.
In the future, we might want to detect if handling cond_resched() is
required at all, and just not do any of that with full preemption enabled.
Signed-off-by: David Hildenbrand <david@redhat.com>
---
mm/mmu_gather.c | 50 ++++++++++++++++++++++++++++++++++++++++---------
1 file changed, 41 insertions(+), 9 deletions(-)
Comments
On 09/02/2024 22:15, David Hildenbrand wrote: > It's a pain that we have to handle cond_resched() in > tlb_batch_pages_flush() manually and cannot simply handle it in > release_pages() -- release_pages() can be called from atomic context. > Well, in a perfect world we wouldn't have to make our code more at all. > > With page poisoning and init_on_free, we might now run into soft lockups > when we free a lot of rather large folio fragments, because page freeing > time then depends on the actual memory size we are freeing instead of on > the number of folios that are involved. > > In the absolute (unlikely) worst case, on arm64 with 64k we will be able > to free up to 256 folio fragments that each span 512 MiB: zeroing out 128 > GiB does sound like it might take a while. But instead of ignoring this > unlikely case, let's just handle it. > > So, let's teach tlb_batch_pages_flush() that there are some > configurations where page freeing is horribly slow, and let's reschedule > more frequently -- similarly like we did for now before we had large folio > fragments in there. Note that we might end up freeing only a single folio > fragment at a time that might exceed the old 512 pages limit: but if we > cannot even free a single MAX_ORDER page on a system without running into > soft lockups, something else is already completely bogus. > > In the future, we might want to detect if handling cond_resched() is > required at all, and just not do any of that with full preemption enabled. > > Signed-off-by: David Hildenbrand <david@redhat.com> > --- > mm/mmu_gather.c | 50 ++++++++++++++++++++++++++++++++++++++++--------- > 1 file changed, 41 insertions(+), 9 deletions(-) > > diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c > index d175c0f1e2c8..2774044b5790 100644 > --- a/mm/mmu_gather.c > +++ b/mm/mmu_gather.c > @@ -91,18 +91,19 @@ void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma) > } > #endif > > -static void tlb_batch_pages_flush(struct mmu_gather *tlb) > +static void __tlb_batch_free_encoded_pages(struct mmu_gather_batch *batch) > { > - struct mmu_gather_batch *batch; > - > - for (batch = &tlb->local; batch && batch->nr; batch = batch->next) { > - struct encoded_page **pages = batch->encoded_pages; > + struct encoded_page **pages = batch->encoded_pages; > + unsigned int nr, nr_pages; > > + /* > + * We might end up freeing a lot of pages. Reschedule on a regular > + * basis to avoid soft lockups in configurations without full > + * preemption enabled. The magic number of 512 folios seems to work. > + */ > + if (!page_poisoning_enabled_static() && !want_init_on_free()) { Is the performance win really worth 2 separate implementations keyed off this? It seems a bit fragile, in case any other operations get added to free which are proportional to size in future. Why not just always do the conservative version? > while (batch->nr) { > - /* > - * limit free batch count when PAGE_SIZE > 4K > - */ > - unsigned int nr = min(512U, batch->nr); > + nr = min(512, batch->nr); If any entries are for more than 1 page, nr_pages will also be encoded in the batch, so effectively this could be limiting to 256 actual folios (half of 512). Is it worth checking for ENCODED_PAGE_BIT_NR_PAGES_NEXT and limiting accordingly? nit: You're using 512 magic number in 2 places now; perhaps make a macro? 
> > /* > * Make sure we cover page + nr_pages, and don't leave > @@ -119,6 +120,37 @@ static void tlb_batch_pages_flush(struct mmu_gather *tlb) > cond_resched(); > } > } > + > + /* > + * With page poisoning and init_on_free, the time it takes to free > + * memory grows proportionally with the actual memory size. Therefore, > + * limit based on the actual memory size and not the number of involved > + * folios. > + */ > + while (batch->nr) { > + for (nr = 0, nr_pages = 0; > + nr < batch->nr && nr_pages < 512; nr++) { > + if (unlikely(encoded_page_flags(pages[nr]) & > + ENCODED_PAGE_BIT_NR_PAGES_NEXT)) > + nr_pages += encoded_nr_pages(pages[++nr]); > + else > + nr_pages++; > + } I guess worst case here is freeing (511 + 8192) * 64K pages = ~544M. That's up from the old limit of 512 * 64K = 32M, and 511 pages bigger than your statement in the commit log. Are you comfortable with this? I guess the only alternative is to start splitting a batch which would be really messy. I agree your approach is preferable if 544M is acceptable. > + > + free_pages_and_swap_cache(pages, nr); > + pages += nr; > + batch->nr -= nr; > + > + cond_resched(); > + } > +} > + > +static void tlb_batch_pages_flush(struct mmu_gather *tlb) > +{ > + struct mmu_gather_batch *batch; > + > + for (batch = &tlb->local; batch && batch->nr; batch = batch->next) > + __tlb_batch_free_encoded_pages(batch); > tlb->active = &tlb->local; > } >
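To make the two-slot encoding this review refers to concrete, here is a minimal user-space sketch of the counting walk (the tagged-integer representation is invented for illustration; the actual kernel packs the flag into the low bits of a struct encoded_page pointer):

/*
 * Sketch only: a folio fragment spanning multiple pages occupies two
 * slots -- an entry with ENCODED_PAGE_BIT_NR_PAGES_NEXT set, followed
 * by a slot carrying the page count.
 */
#include <stdio.h>

#define ENCODED_PAGE_BIT_NR_PAGES_NEXT	1UL

typedef unsigned long encoded_page_t;	/* stand-in for struct encoded_page * */

static unsigned long encoded_page_flags(encoded_page_t e)
{
	return e & ENCODED_PAGE_BIT_NR_PAGES_NEXT;
}

static unsigned long encoded_nr_pages(encoded_page_t e)
{
	return e >> 1;	/* payload lives above the flag bit in this mock */
}

int main(void)
{
	/* one order-0 page, one 512-page fragment (two slots), one order-0 page */
	encoded_page_t pages[] = {
		0,					/* order-0 page */
		ENCODED_PAGE_BIT_NR_PAGES_NEXT,		/* flagged entry ... */
		512UL << 1,				/* ... nr_pages in next slot */
		0,					/* order-0 page */
	};
	unsigned int batch_nr = 4, nr, nr_pages;

	/* the same walk as in the patch: count pages, not array slots */
	for (nr = 0, nr_pages = 0; nr < batch_nr; nr++) {
		if (encoded_page_flags(pages[nr]) & ENCODED_PAGE_BIT_NR_PAGES_NEXT)
			nr_pages += encoded_nr_pages(pages[++nr]);
		else
			nr_pages++;
	}

	/* prints "4 slots -> 514 pages": at most half the slots can be folios */
	printf("%u slots -> %u pages\n", batch_nr, nr_pages);
	return 0;
}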
Hi Ryan, >> -static void tlb_batch_pages_flush(struct mmu_gather *tlb) >> +static void __tlb_batch_free_encoded_pages(struct mmu_gather_batch *batch) >> { >> - struct mmu_gather_batch *batch; >> - >> - for (batch = &tlb->local; batch && batch->nr; batch = batch->next) { >> - struct encoded_page **pages = batch->encoded_pages; >> + struct encoded_page **pages = batch->encoded_pages; >> + unsigned int nr, nr_pages; >> >> + /* >> + * We might end up freeing a lot of pages. Reschedule on a regular >> + * basis to avoid soft lockups in configurations without full >> + * preemption enabled. The magic number of 512 folios seems to work. >> + */ >> + if (!page_poisoning_enabled_static() && !want_init_on_free()) { > > Is the performance win really worth 2 separate implementations keyed off this? > It seems a bit fragile, in case any other operations get added to free which are > proportional to size in future. Why not just always do the conservative version? I really don't want to iterate over all entries on the "sane" common case. We already do that two times: a) free_pages_and_swap_cache() b) release_pages() Only the latter really is required, and I'm planning on removing the one in (a) to move it into (b) as well. So I keep it separate to keep any unnecessary overhead to the setups that are already terribly slow. No need to iterate a page full of entries if it can be easily avoided. Especially, no need to degrade the common order-0 case. > >> while (batch->nr) { >> - /* >> - * limit free batch count when PAGE_SIZE > 4K >> - */ >> - unsigned int nr = min(512U, batch->nr); >> + nr = min(512, batch->nr); > > If any entries are for more than 1 page, nr_pages will also be encoded in the > batch, so effectively this could be limiting to 256 actual folios (half of 512). Right, in the patch description I state "256 folio fragments". It's up to 512 folios (order-0). > Is it worth checking for ENCODED_PAGE_BIT_NR_PAGES_NEXT and limiting accordingly? At least with 4k page size, we never have more than 510 (IIRC) entries per batch page. So any such optimization would only matter for large page sizes, which I don't think is worth it. Which exact optimization do you have in mind and would it really make a difference? > > nit: You're using 512 magic number in 2 places now; perhaps make a macro? I played 3 times with macro names (including just using something "intuitive" like MAX_ORDER_NR_PAGES) but returned to just using 512. That cond_resched() handling is just absolutely disgusting, one way or the other. Do you have a good idea for a macro name? > >> >> /* >> * Make sure we cover page + nr_pages, and don't leave >> @@ -119,6 +120,37 @@ static void tlb_batch_pages_flush(struct mmu_gather *tlb) >> cond_resched(); >> } >> } >> + >> + /* >> + * With page poisoning and init_on_free, the time it takes to free >> + * memory grows proportionally with the actual memory size. Therefore, >> + * limit based on the actual memory size and not the number of involved >> + * folios. >> + */ >> + while (batch->nr) { >> + for (nr = 0, nr_pages = 0; >> + nr < batch->nr && nr_pages < 512; nr++) { >> + if (unlikely(encoded_page_flags(pages[nr]) & >> + ENCODED_PAGE_BIT_NR_PAGES_NEXT)) >> + nr_pages += encoded_nr_pages(pages[++nr]); >> + else >> + nr_pages++; >> + } > > I guess worst case here is freeing (511 + 8192) * 64K pages = ~544M. That's up > from the old limit of 512 * 64K = 32M, and 511 pages bigger than your statement > in the commit log. Are you comfortable with this? 
I guess the only alternative > is to start splitting a batch which would be really messy. I agree your approach > is preferable if 544M is acceptable. Right, I have in the description: "if we cannot even free a single MAX_ORDER page on a system without running into soft lockups, something else is already completely bogus.". That would be 8192 pages on arm64. Anybody freeing a PMD-mapped THP would be in trouble already and should just reconsider life choices running such a machine. We could have 511 more pages, yes. If 8192 don't trigger a soft-lockup, I am confident that 511 more pages won't make a difference. But, if that ever is a problem, we can butcher this code as much as we want, because performance with poisoning/zeroing is already down the drain. As you say, splitting even further is messy, so I rather avoid that unless really required.
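The worst-case arithmetic in this exchange can be double-checked with a standalone sketch (assumptions taken from the thread: arm64 with 64k base pages, a MAX_ORDER/PMD-mapped THP of 8192 pages, and the 512-folio limit from the patch):

#include <stdio.h>

int main(void)
{
	const unsigned long page_kib  = 64;	/* 64k base pages */
	const unsigned long thp_pages = 8192;	/* one MAX_ORDER / PMD-mapped THP */
	const unsigned long limit     = 512;	/* folio limit per cond_resched() */

	/*
	 * Without the memory-size-based limit: 512 slots can hold 256
	 * two-slot fragments of 512 MiB each -> 128 GiB zeroed before
	 * rescheduling.
	 */
	unsigned long old_kib = 256 * thp_pages * page_kib;

	/*
	 * With the patch: the inner loop stops once nr_pages >= 512, but
	 * the entry that crosses the limit may itself be a full THP, so
	 * the worst case is 511 + 8192 pages (~544 MiB).
	 */
	unsigned long new_kib = (limit - 1 + thp_pages) * page_kib;

	printf("old worst case: %lu GiB per cond_resched()\n", old_kib >> 20);
	printf("new worst case: ~%lu MiB per cond_resched()\n",
	       (new_kib + 512) >> 10);	/* rounded to nearest MiB */
	return 0;
}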
On 12/02/2024 10:11, David Hildenbrand wrote: > Hi Ryan, > >>> -static void tlb_batch_pages_flush(struct mmu_gather *tlb) >>> +static void __tlb_batch_free_encoded_pages(struct mmu_gather_batch *batch) >>> { >>> - struct mmu_gather_batch *batch; >>> - >>> - for (batch = &tlb->local; batch && batch->nr; batch = batch->next) { >>> - struct encoded_page **pages = batch->encoded_pages; >>> + struct encoded_page **pages = batch->encoded_pages; >>> + unsigned int nr, nr_pages; >>> + /* >>> + * We might end up freeing a lot of pages. Reschedule on a regular >>> + * basis to avoid soft lockups in configurations without full >>> + * preemption enabled. The magic number of 512 folios seems to work. >>> + */ >>> + if (!page_poisoning_enabled_static() && !want_init_on_free()) { >> >> Is the performance win really worth 2 separate implementations keyed off this? >> It seems a bit fragile, in case any other operations get added to free which are >> proportional to size in future. Why not just always do the conservative version? > > I really don't want to iterate over all entries on the "sane" common case. We > already do that two times: > > a) free_pages_and_swap_cache() > > b) release_pages() > > Only the latter really is required, and I'm planning on removing the one in (a) > to move it into (b) as well. > > So I keep it separate to keep any unnecessary overhead to the setups that are > already terribly slow. > > No need to iterate a page full of entries if it can be easily avoided. > Especially, no need to degrade the common order-0 case. Yeah, I understand all that. But given this is all coming from an array (so easy to prefetch?) and will presumably all fit in the cache for the common case, at least, so it's hot for (a) and (b), does separating this out really make a measurable performance difference? If yes then absolutely this optimization makes sense. But if not, I think it's a bit questionable. You're the boss though, so if your experience tells you this is necessary, then I'm ok with that. By the way, Matthew had an RFC a while back that was doing some clever things with batches further down the call chain (I think; from memory). Might be worth taking a look at that if you are planning a follow-up change to (a). > >> >>> while (batch->nr) { >>> - /* >>> - * limit free batch count when PAGE_SIZE > 4K >>> - */ >>> - unsigned int nr = min(512U, batch->nr); >>> + nr = min(512, batch->nr); >> >> If any entries are for more than 1 page, nr_pages will also be encoded in the >> batch, so effectively this could be limiting to 256 actual folios (half of 512). > > Right, in the patch description I state "256 folio fragments". It's up to 512 > folios (order-0). > >> Is it worth checking for ENCODED_PAGE_BIT_NR_PAGES_NEXT and limiting accordingly? > > At least with 4k page size, we never have more than 510 (IIRC) entries per batch > page. So any such optimization would only matter for large page sizes, which I > don't think is worth it. Yep; agreed. > > Which exact optimization do you have in mind and would it really make a difference? No, I don't think it would make any difference, performance-wise. I'm just pointing out that in pathological cases you could end up with half the number of pages being freed at a time. > >> >> nit: You're using 512 magic number in 2 places now; perhaps make a macro? > > I played 3 times with macro names (including just using something "intuitive" > like MAX_ORDER_NR_PAGES) but returned to just using 512.
> > That cond_resched() handling is just absolutely disgusting, one way or the other. > > Do you have a good idea for a macro name? MAX_NR_FOLIOS_PER_BATCH? MAX_NR_FOLIOS_PER_FREE? I don't think the name has to be perfect, because it's private to the c file; but it ensures the 2 usages remain in sync if someone wants to change it in future. > >> >>> /* >>> * Make sure we cover page + nr_pages, and don't leave >>> @@ -119,6 +120,37 @@ static void tlb_batch_pages_flush(struct mmu_gather *tlb) >>> cond_resched(); >>> } >>> } >>> + >>> + /* >>> + * With page poisoning and init_on_free, the time it takes to free >>> + * memory grows proportionally with the actual memory size. Therefore, >>> + * limit based on the actual memory size and not the number of involved >>> + * folios. >>> + */ >>> + while (batch->nr) { >>> + for (nr = 0, nr_pages = 0; >>> + nr < batch->nr && nr_pages < 512; nr++) { >>> + if (unlikely(encoded_page_flags(pages[nr]) & >>> + ENCODED_PAGE_BIT_NR_PAGES_NEXT)) >>> + nr_pages += encoded_nr_pages(pages[++nr]); >>> + else >>> + nr_pages++; >>> + } >> >> I guess worst case here is freeing (511 + 8192) * 64K pages = ~544M. That's up >> from the old limit of 512 * 64K = 32M, and 511 pages bigger than your statement >> in the commit log. Are you comfortable with this? I guess the only alternative >> is to start splitting a batch which would be really messy. I agree your approach >> is preferable if 544M is acceptable. > > Right, I have in the description: > > "if we cannot even free a single MAX_ORDER page on a system without running into > soft lockups, something else is already completely bogus.". > > That would be 8192 pages on arm64. Anybody freeing a PMD-mapped THP would be in > trouble already and should just reconsider life choices running such a machine. > > We could have 511 more pages, yes. If 8192 don't trigger a soft-lockup, I am > confident that 511 more pages won't make a difference. > > But, if that ever is a problem, we can butcher this code as much as we want, > because performance with poisoning/zeroing is already down the drain. > > As you say, splitting even further is messy, so I rather avoid that unless > really required. > Yep ok, I understand the argument better now - thanks.
On 12.02.24 11:32, Ryan Roberts wrote: > On 12/02/2024 10:11, David Hildenbrand wrote: >> Hi Ryan, >> >>>> -static void tlb_batch_pages_flush(struct mmu_gather *tlb) >>>> +static void __tlb_batch_free_encoded_pages(struct mmu_gather_batch *batch) >>>> { >>>> - struct mmu_gather_batch *batch; >>>> - >>>> - for (batch = &tlb->local; batch && batch->nr; batch = batch->next) { >>>> - struct encoded_page **pages = batch->encoded_pages; >>>> + struct encoded_page **pages = batch->encoded_pages; >>>> + unsigned int nr, nr_pages; >>>> + /* >>>> + * We might end up freeing a lot of pages. Reschedule on a regular >>>> + * basis to avoid soft lockups in configurations without full >>>> + * preemption enabled. The magic number of 512 folios seems to work. >>>> + */ >>>> + if (!page_poisoning_enabled_static() && !want_init_on_free()) { >>> >>> Is the performance win really worth 2 separate implementations keyed off this? >>> It seems a bit fragile, in case any other operations get added to free which are >>> proportional to size in future. Why not just always do the conservative version? >> >> I really don't want to iterate over all entries on the "sane" common case. We >> already do that two times: >> >> a) free_pages_and_swap_cache() >> >> b) release_pages() >> >> Only the latter really is required, and I'm planning on removing the one in (a) >> to move it into (b) as well. >> >> So I keep it separate to keep any unnecessary overhead to the setups that are >> already terribly slow. >> >> No need to iterate a page full of entries if it can be easily avoided. >> Especially, no need to degrade the common order-0 case. > > Yeah, I understand all that. But given this is all coming from an array, (so > easy to prefetch?) and will presumably all fit in the cache for the common case, > at least, so its hot for (a) and (b), does separating this out really make a > measurable performance difference? If yes then absolutely this optimizaiton > makes sense. But if not, I think its a bit questionable. I primarily added it because (a) we learned that each cycle counts during mmap() just like it does during fork(). (b) Linus was similarly concerned about optimizing out another batching walk in c47454823bd4 ("mm: mmu_gather: allow more than one batch of delayed rmaps"): "it needs to walk that array of pages while still holding the page table lock, and our mmu_gather infrastructure allows for batching quite a lot of pages. We may have thousands of pages queued up for freeing, and we wanted to walk only the last batch if we then added a dirty page to the queue." So if it matters enough for reducing the time we hold the page table lock, it surely adds "some" overhead in general. > > You're the boss though, so if your experience tells you this is neccessary, then > I'm ok with that. I did not do any measurements myself, I just did that intuitively as above. After all, it's all pretty straightforward (keeping the existing logic, we need a new one either way) and not that much code. So unless there are strong opinions, I'd just leave the common case as it was, and the odd case be special. > > By the way, Matthew had an RFC a while back that was doing some clever things > with batches further down the call chain (I think; be memory). Might be worth > taking a look at that if you are planning a follow up change to (a). > Do you have a pointer?
>> >>> >>>> while (batch->nr) { >>>> - /* >>>> - * limit free batch count when PAGE_SIZE > 4K >>>> - */ >>>> - unsigned int nr = min(512U, batch->nr); >>>> + nr = min(512, batch->nr); >>> >>> If any entries are for more than 1 page, nr_pages will also be encoded in the >>> batch, so effectively this could be limiting to 256 actual folios (half of 512). >> >> Right, in the patch description I state "256 folio fragments". It's up to 512 >> folios (order-0). >> >>> Is it worth checking for ENCODED_PAGE_BIT_NR_PAGES_NEXT and limiting accordingly? >> >> At least with 4k page size, we never have more than 510 (IIRC) entries per batch >> page. So any such optimization would only matter for large page sizes, which I >> don't think is worth it. > > Yep; agreed. > >> >> Which exact optimization do you have in mind and would it really make a difference? > > No I don't think it would make any difference, performance-wise. I'm just > pointing out that in pathalogical cases you could end up with half the number of > pages being freed at a time. Yes, I'll extend the patch description! > >> >>> >>> nit: You're using 512 magic number in 2 places now; perhaps make a macro? >> >> I played 3 times with macro names (including just using something "intuitive" >> like MAX_ORDER_NR_PAGES) but returned to just using 512. >> >> That cond_resched() handling is just absolutely disgusting, one way or the other. >> >> Do you have a good idea for a macro name? > > MAX_NR_FOLIOS_PER_BATCH? > MAX_NR_FOLIOS_PER_FREE? > > I don't think the name has to be perfect, because its private to the c file; but > it ensures the 2 usages remain in sync if someone wants to change it in future. Makes sense, I'll use something along those lines. > >> >>> >>>> /* >>>> * Make sure we cover page + nr_pages, and don't leave >>>> @@ -119,6 +120,37 @@ static void tlb_batch_pages_flush(struct mmu_gather *tlb) >>>> cond_resched(); >>>> } >>>> } >>>> + >>>> + /* >>>> + * With page poisoning and init_on_free, the time it takes to free >>>> + * memory grows proportionally with the actual memory size. Therefore, >>>> + * limit based on the actual memory size and not the number of involved >>>> + * folios. >>>> + */ >>>> + while (batch->nr) { >>>> + for (nr = 0, nr_pages = 0; >>>> + nr < batch->nr && nr_pages < 512; nr++) { >>>> + if (unlikely(encoded_page_flags(pages[nr]) & >>>> + ENCODED_PAGE_BIT_NR_PAGES_NEXT)) >>>> + nr_pages += encoded_nr_pages(pages[++nr]); >>>> + else >>>> + nr_pages++; >>>> + } >>> >>> I guess worst case here is freeing (511 + 8192) * 64K pages = ~544M. That's up >>> from the old limit of 512 * 64K = 32M, and 511 pages bigger than your statement >>> in the commit log. Are you comfortable with this? I guess the only alternative >>> is to start splitting a batch which would be really messy. I agree your approach >>> is preferable if 544M is acceptable. >> >> Right, I have in the description: >> >> "if we cannot even free a single MAX_ORDER page on a system without running into >> soft lockups, something else is already completely bogus.". >> >> That would be 8192 pages on arm64. Anybody freeing a PMD-mapped THP would be in >> trouble already and should just reconsider life choices running such a machine. >> >> We could have 511 more pages, yes. If 8192 don't trigger a soft-lockup, I am >> confident that 511 more pages won't make a difference. >> >> But, if that ever is a problem, we can butcher this code as much as we want, >> because performance with poisoning/zeroing is already down the drain. 
>> >> As you say, splitting even further is messy, so I rather avoid that unless >> really required. >> > > Yep ok, I understand the argument better now - thanks. > I'll further extend the patch description. Thanks!
On 12.02.24 11:56, David Hildenbrand wrote: > On 12.02.24 11:32, Ryan Roberts wrote: >> On 12/02/2024 10:11, David Hildenbrand wrote: >>> Hi Ryan, >>> >>>>> -static void tlb_batch_pages_flush(struct mmu_gather *tlb) >>>>> +static void __tlb_batch_free_encoded_pages(struct mmu_gather_batch *batch) >>>>> { >>>>> - struct mmu_gather_batch *batch; >>>>> - >>>>> - for (batch = &tlb->local; batch && batch->nr; batch = batch->next) { >>>>> - struct encoded_page **pages = batch->encoded_pages; >>>>> + struct encoded_page **pages = batch->encoded_pages; >>>>> + unsigned int nr, nr_pages; >>>>> + /* >>>>> + * We might end up freeing a lot of pages. Reschedule on a regular >>>>> + * basis to avoid soft lockups in configurations without full >>>>> + * preemption enabled. The magic number of 512 folios seems to work. >>>>> + */ >>>>> + if (!page_poisoning_enabled_static() && !want_init_on_free()) { >>>> >>>> Is the performance win really worth 2 separate implementations keyed off this? >>>> It seems a bit fragile, in case any other operations get added to free which are >>>> proportional to size in future. Why not just always do the conservative version? >>> >>> I really don't want to iterate over all entries on the "sane" common case. We >>> already do that two times: >>> >>> a) free_pages_and_swap_cache() >>> >>> b) release_pages() >>> >>> Only the latter really is required, and I'm planning on removing the one in (a) >>> to move it into (b) as well. >>> >>> So I keep it separate to keep any unnecessary overhead to the setups that are >>> already terribly slow. >>> >>> No need to iterate a page full of entries if it can be easily avoided. >>> Especially, no need to degrade the common order-0 case. >> >> Yeah, I understand all that. But given this is all coming from an array, (so >> easy to prefetch?) and will presumably all fit in the cache for the common case, >> at least, so its hot for (a) and (b), does separating this out really make a >> measurable performance difference? If yes then absolutely this optimizaiton >> makes sense. But if not, I think its a bit questionable. > > I primarily added it because > > (a) we learned that each cycle counts during mmap() just like it does > during fork(). > > (b) Linus was similarly concerned about optimizing out another batching > walk in c47454823bd4 ("mm: mmu_gather: allow more than one batch of > delayed rmaps"): > > "it needs to walk that array of pages while still holding the page table > lock, and our mmu_gather infrastructure allows for batching quite a lot > of pages. We may have thousands on pages queued up for freeing, and we > wanted to walk only the last batch if we then added a dirty page to the > queue." > > So if it matters enough for reducing the time we hold the page table > lock, it surely adds "some" overhead in general. > > >> >> You're the boss though, so if your experience tells you this is neccessary, then >> I'm ok with that. > > I did not do any measurements myself, I just did that intuitively as > above. After all, it's all pretty straight forward (keeping the existing > logic, we need a new one either way) and not that much code. > > So unless there are strong opinions, I'd just leave the common case as > it was, and the odd case be special. 
I think we can just reduce the code duplication easily:

diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index d175c0f1e2c8..99b3e9408aa0 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -91,18 +91,21 @@ void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma)
 }
 #endif
 
-static void tlb_batch_pages_flush(struct mmu_gather *tlb)
-{
-	struct mmu_gather_batch *batch;
+/*
+ * We might end up freeing a lot of pages. Reschedule on a regular
+ * basis to avoid soft lockups in configurations without full
+ * preemption enabled. The magic number of 512 folios seems to work.
+ */
+#define MAX_NR_FOLIOS_PER_FREE	512
 
-	for (batch = &tlb->local; batch && batch->nr; batch = batch->next) {
-		struct encoded_page **pages = batch->encoded_pages;
+static void __tlb_batch_free_encoded_pages(struct mmu_gather_batch *batch)
+{
+	struct encoded_page **pages = batch->encoded_pages;
+	unsigned int nr, nr_pages;
 
-		while (batch->nr) {
-			/*
-			 * limit free batch count when PAGE_SIZE > 4K
-			 */
-			unsigned int nr = min(512U, batch->nr);
+	while (batch->nr) {
+		if (!page_poisoning_enabled_static() && !want_init_on_free()) {
+			nr = min(MAX_NR_FOLIOS_PER_FREE, batch->nr);
 
 			/*
 			 * Make sure we cover page + nr_pages, and don't leave
@@ -111,14 +114,39 @@ static void tlb_batch_pages_flush(struct mmu_gather *tlb)
 			if (unlikely(encoded_page_flags(pages[nr - 1]) &
 				     ENCODED_PAGE_BIT_NR_PAGES_NEXT))
 				nr++;
+		} else {
+			/*
+			 * With page poisoning and init_on_free, the time it
+			 * takes to free memory grows proportionally with the
+			 * actual memory size. Therefore, limit based on the
+			 * actual memory size and not the number of involved
+			 * folios.
+			 */
+			for (nr = 0, nr_pages = 0;
+			     nr < batch->nr && nr_pages < MAX_NR_FOLIOS_PER_FREE;
+			     nr++) {
+				if (unlikely(encoded_page_flags(pages[nr]) &
+					     ENCODED_PAGE_BIT_NR_PAGES_NEXT))
+					nr_pages += encoded_nr_pages(pages[++nr]);
+				else
+					nr_pages++;
+			}
+		}
 
-			free_pages_and_swap_cache(pages, nr);
-			pages += nr;
-			batch->nr -= nr;
+		free_pages_and_swap_cache(pages, nr);
+		pages += nr;
+		batch->nr -= nr;
 
-			cond_resched();
-		}
+		cond_resched();
 	}
+}
+
+static void tlb_batch_pages_flush(struct mmu_gather *tlb)
+{
+	struct mmu_gather_batch *batch;
+
+	for (batch = &tlb->local; batch && batch->nr; batch = batch->next)
+		__tlb_batch_free_encoded_pages(batch);
 	tlb->active = &tlb->local;
 }
On 12/02/2024 11:05, David Hildenbrand wrote: > On 12.02.24 11:56, David Hildenbrand wrote: >> On 12.02.24 11:32, Ryan Roberts wrote: >>> On 12/02/2024 10:11, David Hildenbrand wrote: >>>> Hi Ryan, >>>> >>>>>> -static void tlb_batch_pages_flush(struct mmu_gather *tlb) >>>>>> +static void __tlb_batch_free_encoded_pages(struct mmu_gather_batch *batch) >>>>>> { >>>>>> - struct mmu_gather_batch *batch; >>>>>> - >>>>>> - for (batch = &tlb->local; batch && batch->nr; batch = batch->next) { >>>>>> - struct encoded_page **pages = batch->encoded_pages; >>>>>> + struct encoded_page **pages = batch->encoded_pages; >>>>>> + unsigned int nr, nr_pages; >>>>>> + /* >>>>>> + * We might end up freeing a lot of pages. Reschedule on a regular >>>>>> + * basis to avoid soft lockups in configurations without full >>>>>> + * preemption enabled. The magic number of 512 folios seems to work. >>>>>> + */ >>>>>> + if (!page_poisoning_enabled_static() && !want_init_on_free()) { >>>>> >>>>> Is the performance win really worth 2 separate implementations keyed off this? >>>>> It seems a bit fragile, in case any other operations get added to free >>>>> which are >>>>> proportional to size in future. Why not just always do the conservative >>>>> version? >>>> >>>> I really don't want to iterate over all entries on the "sane" common case. We >>>> already do that two times: >>>> >>>> a) free_pages_and_swap_cache() >>>> >>>> b) release_pages() >>>> >>>> Only the latter really is required, and I'm planning on removing the one in (a) >>>> to move it into (b) as well. >>>> >>>> So I keep it separate to keep any unnecessary overhead to the setups that are >>>> already terribly slow. >>>> >>>> No need to iterate a page full of entries if it can be easily avoided. >>>> Especially, no need to degrade the common order-0 case. >>> >>> Yeah, I understand all that. But given this is all coming from an array, (so >>> easy to prefetch?) and will presumably all fit in the cache for the common case, >>> at least, so its hot for (a) and (b), does separating this out really make a >>> measurable performance difference? If yes then absolutely this optimizaiton >>> makes sense. But if not, I think its a bit questionable. >> >> I primarily added it because >> >> (a) we learned that each cycle counts during mmap() just like it does >> during fork(). >> >> (b) Linus was similarly concerned about optimizing out another batching >> walk in c47454823bd4 ("mm: mmu_gather: allow more than one batch of >> delayed rmaps"): >> >> "it needs to walk that array of pages while still holding the page table >> lock, and our mmu_gather infrastructure allows for batching quite a lot >> of pages. We may have thousands on pages queued up for freeing, and we >> wanted to walk only the last batch if we then added a dirty page to the >> queue." >> >> So if it matters enough for reducing the time we hold the page table >> lock, it surely adds "some" overhead in general. >> >> >>> >>> You're the boss though, so if your experience tells you this is neccessary, then >>> I'm ok with that. >> >> I did not do any measurements myself, I just did that intuitively as >> above. After all, it's all pretty straight forward (keeping the existing >> logic, we need a new one either way) and not that much code. >> >> So unless there are strong opinions, I'd just leave the common case as >> it was, and the odd case be special. 
> > I think we can just reduce the code duplication easily: > > diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c > index d175c0f1e2c8..99b3e9408aa0 100644 > --- a/mm/mmu_gather.c > +++ b/mm/mmu_gather.c > @@ -91,18 +91,21 @@ void tlb_flush_rmaps(struct mmu_gather *tlb, struct > vm_area_struct *vma) > } > #endif > > -static void tlb_batch_pages_flush(struct mmu_gather *tlb) > -{ > - struct mmu_gather_batch *batch; > +/* > + * We might end up freeing a lot of pages. Reschedule on a regular > + * basis to avoid soft lockups in configurations without full > + * preemption enabled. The magic number of 512 folios seems to work. > + */ > +#define MAX_NR_FOLIOS_PER_FREE 512 > > - for (batch = &tlb->local; batch && batch->nr; batch = batch->next) { > - struct encoded_page **pages = batch->encoded_pages; > +static void __tlb_batch_free_encoded_pages(struct mmu_gather_batch *batch) > +{ > + struct encoded_page **pages = batch->encoded_pages; > + unsigned int nr, nr_pages; > > - while (batch->nr) { > - /* > - * limit free batch count when PAGE_SIZE > 4K > - */ > - unsigned int nr = min(512U, batch->nr); > + while (batch->nr) { > + if (!page_poisoning_enabled_static() && !want_init_on_free()) { > + nr = min(MAX_NR_FOLIOS_PER_FREE, batch->nr); > > /* > * Make sure we cover page + nr_pages, and don't leave > @@ -111,14 +114,39 @@ static void tlb_batch_pages_flush(struct mmu_gather *tlb) > if (unlikely(encoded_page_flags(pages[nr - 1]) & > ENCODED_PAGE_BIT_NR_PAGES_NEXT)) > nr++; > + } else { > + /* > + * With page poisoning and init_on_free, the time it > + * takes to free memory grows proportionally with the > + * actual memory size. Therefore, limit based on the > + * actual memory size and not the number of involved > + * folios. > + */ > + for (nr = 0, nr_pages = 0; > + nr < batch->nr && nr_pages < MAX_NR_FOLIOS_PER_FREE; > + nr++) { > + if (unlikely(encoded_page_flags(pages[nr]) & > + ENCODED_PAGE_BIT_NR_PAGES_NEXT)) > + nr_pages += encoded_nr_pages(pages[++nr]); > + else > + nr_pages++; > + } > + } > > - free_pages_and_swap_cache(pages, nr); > - pages += nr; > - batch->nr -= nr; > + free_pages_and_swap_cache(pages, nr); > + pages += nr; > + batch->nr -= nr; > > - cond_resched(); > - } > + cond_resched(); > } > +} > + > +static void tlb_batch_pages_flush(struct mmu_gather *tlb) > +{ > + struct mmu_gather_batch *batch; > + > + for (batch = &tlb->local; batch && batch->nr; batch = batch->next) > + __tlb_batch_free_encoded_pages(batch); > tlb->active = &tlb->local; > } > Yes this is much cleaner IMHO! I don't think putting the poison and init_on_free checks inside the while loops should make a whole lot of difference - you're only going round that loop once in the common (4K pages) case. Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
On 12.02.24 12:21, Ryan Roberts wrote: > On 12/02/2024 11:05, David Hildenbrand wrote: >> On 12.02.24 11:56, David Hildenbrand wrote: >>> On 12.02.24 11:32, Ryan Roberts wrote: >>>> On 12/02/2024 10:11, David Hildenbrand wrote: >>>>> Hi Ryan, >>>>> >>>>>>> -static void tlb_batch_pages_flush(struct mmu_gather *tlb) >>>>>>> +static void __tlb_batch_free_encoded_pages(struct mmu_gather_batch *batch) >>>>>>> { >>>>>>> - struct mmu_gather_batch *batch; >>>>>>> - >>>>>>> - for (batch = &tlb->local; batch && batch->nr; batch = batch->next) { >>>>>>> - struct encoded_page **pages = batch->encoded_pages; >>>>>>> + struct encoded_page **pages = batch->encoded_pages; >>>>>>> + unsigned int nr, nr_pages; >>>>>>> + /* >>>>>>> + * We might end up freeing a lot of pages. Reschedule on a regular >>>>>>> + * basis to avoid soft lockups in configurations without full >>>>>>> + * preemption enabled. The magic number of 512 folios seems to work. >>>>>>> + */ >>>>>>> + if (!page_poisoning_enabled_static() && !want_init_on_free()) { >>>>>> >>>>>> Is the performance win really worth 2 separate implementations keyed off this? >>>>>> It seems a bit fragile, in case any other operations get added to free >>>>>> which are >>>>>> proportional to size in future. Why not just always do the conservative >>>>>> version? >>>>> >>>>> I really don't want to iterate over all entries on the "sane" common case. We >>>>> already do that two times: >>>>> >>>>> a) free_pages_and_swap_cache() >>>>> >>>>> b) release_pages() >>>>> >>>>> Only the latter really is required, and I'm planning on removing the one in (a) >>>>> to move it into (b) as well. >>>>> >>>>> So I keep it separate to keep any unnecessary overhead to the setups that are >>>>> already terribly slow. >>>>> >>>>> No need to iterate a page full of entries if it can be easily avoided. >>>>> Especially, no need to degrade the common order-0 case. >>>> >>>> Yeah, I understand all that. But given this is all coming from an array, (so >>>> easy to prefetch?) and will presumably all fit in the cache for the common case, >>>> at least, so its hot for (a) and (b), does separating this out really make a >>>> measurable performance difference? If yes then absolutely this optimizaiton >>>> makes sense. But if not, I think its a bit questionable. >>> >>> I primarily added it because >>> >>> (a) we learned that each cycle counts during mmap() just like it does >>> during fork(). >>> >>> (b) Linus was similarly concerned about optimizing out another batching >>> walk in c47454823bd4 ("mm: mmu_gather: allow more than one batch of >>> delayed rmaps"): >>> >>> "it needs to walk that array of pages while still holding the page table >>> lock, and our mmu_gather infrastructure allows for batching quite a lot >>> of pages. We may have thousands on pages queued up for freeing, and we >>> wanted to walk only the last batch if we then added a dirty page to the >>> queue." >>> >>> So if it matters enough for reducing the time we hold the page table >>> lock, it surely adds "some" overhead in general. >>> >>> >>>> >>>> You're the boss though, so if your experience tells you this is neccessary, then >>>> I'm ok with that. >>> >>> I did not do any measurements myself, I just did that intuitively as >>> above. After all, it's all pretty straight forward (keeping the existing >>> logic, we need a new one either way) and not that much code. >>> >>> So unless there are strong opinions, I'd just leave the common case as >>> it was, and the odd case be special. 
>> >> I think we can just reduce the code duplication easily: >> >> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c >> index d175c0f1e2c8..99b3e9408aa0 100644 >> --- a/mm/mmu_gather.c >> +++ b/mm/mmu_gather.c >> @@ -91,18 +91,21 @@ void tlb_flush_rmaps(struct mmu_gather *tlb, struct >> vm_area_struct *vma) >> } >> #endif >> >> -static void tlb_batch_pages_flush(struct mmu_gather *tlb) >> -{ >> - struct mmu_gather_batch *batch; >> +/* >> + * We might end up freeing a lot of pages. Reschedule on a regular >> + * basis to avoid soft lockups in configurations without full >> + * preemption enabled. The magic number of 512 folios seems to work. >> + */ >> +#define MAX_NR_FOLIOS_PER_FREE 512 >> >> - for (batch = &tlb->local; batch && batch->nr; batch = batch->next) { >> - struct encoded_page **pages = batch->encoded_pages; >> +static void __tlb_batch_free_encoded_pages(struct mmu_gather_batch *batch) >> +{ >> + struct encoded_page **pages = batch->encoded_pages; >> + unsigned int nr, nr_pages; >> >> - while (batch->nr) { >> - /* >> - * limit free batch count when PAGE_SIZE > 4K >> - */ >> - unsigned int nr = min(512U, batch->nr); >> + while (batch->nr) { >> + if (!page_poisoning_enabled_static() && !want_init_on_free()) { >> + nr = min(MAX_NR_FOLIOS_PER_FREE, batch->nr); >> >> /* >> * Make sure we cover page + nr_pages, and don't leave >> @@ -111,14 +114,39 @@ static void tlb_batch_pages_flush(struct mmu_gather *tlb) >> if (unlikely(encoded_page_flags(pages[nr - 1]) & >> ENCODED_PAGE_BIT_NR_PAGES_NEXT)) >> nr++; >> + } else { >> + /* >> + * With page poisoning and init_on_free, the time it >> + * takes to free memory grows proportionally with the >> + * actual memory size. Therefore, limit based on the >> + * actual memory size and not the number of involved >> + * folios. >> + */ >> + for (nr = 0, nr_pages = 0; >> + nr < batch->nr && nr_pages < MAX_NR_FOLIOS_PER_FREE; >> + nr++) { >> + if (unlikely(encoded_page_flags(pages[nr]) & >> + ENCODED_PAGE_BIT_NR_PAGES_NEXT)) >> + nr_pages += encoded_nr_pages(pages[++nr]); >> + else >> + nr_pages++; >> + } >> + } >> >> - free_pages_and_swap_cache(pages, nr); >> - pages += nr; >> - batch->nr -= nr; >> + free_pages_and_swap_cache(pages, nr); >> + pages += nr; >> + batch->nr -= nr; >> >> - cond_resched(); >> - } >> + cond_resched(); >> } >> +} >> + >> +static void tlb_batch_pages_flush(struct mmu_gather *tlb) >> +{ >> + struct mmu_gather_batch *batch; >> + >> + for (batch = &tlb->local; batch && batch->nr; batch = batch->next) >> + __tlb_batch_free_encoded_pages(batch); >> tlb->active = &tlb->local; >> } >> > > Yes this is much cleaner IMHO! I don't think putting the poison and init_on_free > checks inside the while loops should make a whole lot of difference - you're > only going round that loop once in the common (4K pages) case. Exactly. > > Reviewed-by: Ryan Roberts <ryan.roberts@arm.com> Thanks, this is the full patch, including the extended patch description: From 5518fb32b950154794380d029eef8751af8c9804 Mon Sep 17 00:00:00 2001 From: David Hildenbrand <david@redhat.com> Date: Fri, 9 Feb 2024 18:43:11 +0100 Subject: [PATCH] mm/mmu_gather: improve cond_resched() handling with large folios and expensive page freeing In tlb_batch_pages_flush(), we can end up freeing up to 512 pages or now up to 256 folio fragments that span more than one page, before we conditionally reschedule. 
It's a pain that we have to handle cond_resched() in
tlb_batch_pages_flush() manually and cannot simply handle it in
release_pages() -- release_pages() can be called from atomic context.
Well, in a perfect world we wouldn't have to make our code more
complicated at all.

With page poisoning and init_on_free, we might now run into soft lockups
when we free a lot of rather large folio fragments, because page freeing
time then depends on the actual memory size we are freeing instead of on
the number of folios that are involved.

In the absolute (unlikely) worst case, on arm64 with 64k we will be able
to free up to 256 folio fragments that each span 512 MiB: zeroing out
128 GiB does sound like it might take a while. But instead of ignoring
this unlikely case, let's just handle it.

So, let's teach tlb_batch_pages_flush() that there are some
configurations where page freeing is horribly slow, and let's reschedule
more frequently -- similar to what we did before we had large folio
fragments in there. Avoid yet another loop over all encoded pages in the
common case by handling that separately.

Note that with page poisoning/zeroing, we might now end up freeing only
a single folio fragment at a time that might exceed the old 512 pages
limit: but if we cannot even free a single MAX_ORDER page on a system
without running into soft lockups, something else is already completely
bogus. Freeing a PMD-mapped THP would similarly cause trouble.

In theory, we might even free 511 order-0 pages + a single MAX_ORDER
page, effectively having to zero out 8703 pages on arm64 with 64k,
translating to ~544 MiB of memory: however, if 512 MiB doesn't result in
soft lockups, 544 MiB is unlikely to result in soft lockups, so we won't
care about that for the time being.

In the future, we might want to detect if handling cond_resched() is
required at all, and just not do any of that with full preemption
enabled.

Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/mmu_gather.c | 58 ++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 43 insertions(+), 15 deletions(-)

diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index d175c0f1e2c8..99b3e9408aa0 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -91,18 +91,21 @@ void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma)
 }
 #endif
 
-static void tlb_batch_pages_flush(struct mmu_gather *tlb)
-{
-	struct mmu_gather_batch *batch;
+/*
+ * We might end up freeing a lot of pages. Reschedule on a regular
+ * basis to avoid soft lockups in configurations without full
+ * preemption enabled. The magic number of 512 folios seems to work.
+ */
+#define MAX_NR_FOLIOS_PER_FREE	512
 
-	for (batch = &tlb->local; batch && batch->nr; batch = batch->next) {
-		struct encoded_page **pages = batch->encoded_pages;
+static void __tlb_batch_free_encoded_pages(struct mmu_gather_batch *batch)
+{
+	struct encoded_page **pages = batch->encoded_pages;
+	unsigned int nr, nr_pages;
 
-		while (batch->nr) {
-			/*
-			 * limit free batch count when PAGE_SIZE > 4K
-			 */
-			unsigned int nr = min(512U, batch->nr);
+	while (batch->nr) {
+		if (!page_poisoning_enabled_static() && !want_init_on_free()) {
+			nr = min(MAX_NR_FOLIOS_PER_FREE, batch->nr);
 
 			/*
 			 * Make sure we cover page + nr_pages, and don't leave
@@ -111,14 +114,39 @@ static void tlb_batch_pages_flush(struct mmu_gather *tlb)
 			if (unlikely(encoded_page_flags(pages[nr - 1]) &
 				     ENCODED_PAGE_BIT_NR_PAGES_NEXT))
 				nr++;
+		} else {
+			/*
+			 * With page poisoning and init_on_free, the time it
+			 * takes to free memory grows proportionally with the
+			 * actual memory size. Therefore, limit based on the
+			 * actual memory size and not the number of involved
+			 * folios.
+			 */
+			for (nr = 0, nr_pages = 0;
+			     nr < batch->nr && nr_pages < MAX_NR_FOLIOS_PER_FREE;
+			     nr++) {
+				if (unlikely(encoded_page_flags(pages[nr]) &
+					     ENCODED_PAGE_BIT_NR_PAGES_NEXT))
+					nr_pages += encoded_nr_pages(pages[++nr]);
+				else
+					nr_pages++;
+			}
+		}
 
-			free_pages_and_swap_cache(pages, nr);
-			pages += nr;
-			batch->nr -= nr;
+		free_pages_and_swap_cache(pages, nr);
+		pages += nr;
+		batch->nr -= nr;
 
-			cond_resched();
-		}
+		cond_resched();
 	}
+}
+
+static void tlb_batch_pages_flush(struct mmu_gather *tlb)
+{
+	struct mmu_gather_batch *batch;
+
+	for (batch = &tlb->local; batch && batch->nr; batch = batch->next)
+		__tlb_batch_free_encoded_pages(batch);
 	tlb->active = &tlb->local;
 }
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index d175c0f1e2c8..2774044b5790 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -91,18 +91,19 @@ void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma)
 }
 #endif
 
-static void tlb_batch_pages_flush(struct mmu_gather *tlb)
+static void __tlb_batch_free_encoded_pages(struct mmu_gather_batch *batch)
 {
-	struct mmu_gather_batch *batch;
-
-	for (batch = &tlb->local; batch && batch->nr; batch = batch->next) {
-		struct encoded_page **pages = batch->encoded_pages;
+	struct encoded_page **pages = batch->encoded_pages;
+	unsigned int nr, nr_pages;
 
+	/*
+	 * We might end up freeing a lot of pages. Reschedule on a regular
+	 * basis to avoid soft lockups in configurations without full
+	 * preemption enabled. The magic number of 512 folios seems to work.
+	 */
+	if (!page_poisoning_enabled_static() && !want_init_on_free()) {
 		while (batch->nr) {
-			/*
-			 * limit free batch count when PAGE_SIZE > 4K
-			 */
-			unsigned int nr = min(512U, batch->nr);
+			nr = min(512, batch->nr);
 
 			/*
 			 * Make sure we cover page + nr_pages, and don't leave
@@ -119,6 +120,37 @@ static void tlb_batch_pages_flush(struct mmu_gather *tlb)
 			cond_resched();
 		}
 	}
+
+	/*
+	 * With page poisoning and init_on_free, the time it takes to free
+	 * memory grows proportionally with the actual memory size. Therefore,
+	 * limit based on the actual memory size and not the number of involved
+	 * folios.
+	 */
+	while (batch->nr) {
+		for (nr = 0, nr_pages = 0;
+		     nr < batch->nr && nr_pages < 512; nr++) {
+			if (unlikely(encoded_page_flags(pages[nr]) &
+				     ENCODED_PAGE_BIT_NR_PAGES_NEXT))
+				nr_pages += encoded_nr_pages(pages[++nr]);
+			else
+				nr_pages++;
+		}
+
+		free_pages_and_swap_cache(pages, nr);
+		pages += nr;
+		batch->nr -= nr;
+
+		cond_resched();
+	}
+}
+
+static void tlb_batch_pages_flush(struct mmu_gather *tlb)
+{
+	struct mmu_gather_batch *batch;
+
+	for (batch = &tlb->local; batch && batch->nr; batch = batch->next)
+		__tlb_batch_free_encoded_pages(batch);
 	tlb->active = &tlb->local;
 }