Message ID | c4b8485b-1f26-1a5f-bdf-c6c22611f610@google.com |
---|---|
Headers |
Date: Fri, 18 Nov 2022 01:08:13 -0800 (PST)
From: Hugh Dickins <hughd@google.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>, Johannes Weiner <hannes@cmpxchg.org>, "Kirill A. Shutemov" <kirill@shutemov.name>, Matthew Wilcox <willy@infradead.org>, David Hildenbrand <david@redhat.com>, Vlastimil Babka <vbabka@suse.cz>, Peter Xu <peterx@redhat.com>, Yang Shi <shy828301@gmail.com>, John Hubbard <jhubbard@nvidia.com>, Mike Kravetz <mike.kravetz@oracle.com>, Sidhartha Kumar <sidhartha.kumar@oracle.com>, Muchun Song <songmuchun@bytedance.com>, Miaohe Lin <linmiaohe@huawei.com>, Naoya Horiguchi <naoya.horiguchi@linux.dev>, Mina Almasry <almasrymina@google.com>, James Houghton <jthoughton@google.com>, Zach O'Keefe <zokeefe@google.com>, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 0/3] mm,thp,rmap: rework the use of subpages_mapcount
Message-ID: <c4b8485b-1f26-1a5f-bdf-c6c22611f610@google.com>
In-Reply-To: <5f52de70-975-e94f-f141-543765736181@google.com>
References: <5f52de70-975-e94f-f141-543765736181@google.com> |
Series |
mm,thp,rmap: rework the use of subpages_mapcount
|
Message
Hugh Dickins
Nov. 18, 2022, 9:08 a.m. UTC
Linus was underwhelmed by the earlier compound mapcounts series:
this series builds on top of it (as in next-20221117) to follow
up on his suggestions - except rmap.c still using lock_page_memcg(),
since I hesitate to steal the pleasure of deletion from Johannes.

1/3 mm,thp,rmap: subpages_mapcount of PTE-mapped subpages
2/3 mm,thp,rmap: subpages_mapcount COMPOUND_MAPPED if PMD-mapped
3/3 mm,thp,rmap: clean up the end of __split_huge_pmd_locked()

 Documentation/mm/transhuge.rst |  10 +-
 include/linux/mm.h             |  65 +++++++----
 include/linux/rmap.h           |  12 +-
 mm/debug.c                     |   2 +-
 mm/huge_memory.c               |  15 +--
 mm/rmap.c                      | 213 ++++++++++-------------------------
 6 files changed, 119 insertions(+), 198 deletions(-)

Hugh
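The core trick of patch 2/3 - folding "is this THP currently PMD-mapped?" into a spare high bit of subpages_mapcount, so no bit_spin_lock is needed - can be modelled in a few lines of ordinary C. The sketch below is only an illustration of that idea under assumed names and values (COMPOUND_MAPPED and the 0x800000 split are assumptions for this sketch, not quoted from the patches).

    /* model.c - minimal userspace model of the counting scheme above;
     * illustrative only, not the kernel patch. */
    #include <stdatomic.h>
    #include <stdio.h>

    #define COMPOUND_MAPPED   0x800000                 /* "PMD-mapped" flag bit (assumed value) */
    #define SUBPAGES_MAPPED   (COMPOUND_MAPPED - 1)    /* low bits: PTE-mapped subpages */

    static atomic_int subpages_mapcount;               /* one such field per THP */

    static void map_pmd(void)   { atomic_fetch_add(&subpages_mapcount, COMPOUND_MAPPED); }
    static void unmap_pmd(void) { atomic_fetch_sub(&subpages_mapcount, COMPOUND_MAPPED); }
    static void map_pte(void)   { atomic_fetch_add(&subpages_mapcount, 1); }
    static void unmap_pte(void) { atomic_fetch_sub(&subpages_mapcount, 1); }

    int main(void)
    {
        int v;

        map_pmd();              /* THP mapped by one PMD */
        map_pte();              /* two of its subpages also mapped by PTEs */
        map_pte();

        v = atomic_load(&subpages_mapcount);   /* one read yields both facts */
        printf("PMD-mapped: %s, PTE-mapped subpages: %d\n",
               (v & COMPOUND_MAPPED) ? "yes" : "no", v & SUBPAGES_MAPPED);

        unmap_pte();
        unmap_pte();
        unmap_pmd();
        return 0;
    }

Because both facts come out of a single atomic read, and a writer only ever does one atomic add or subtract, readers no longer need the bit_spin_lock that the earlier series introduced.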
Comments
On Fri, Nov 18, 2022 at 1:08 AM Hugh Dickins <hughd@google.com> wrote:
>
> Linus was underwhelmed by the earlier compound mapcounts series: this series builds on top of it (as in next-20221117) to follow up on his suggestions - except rmap.c still using lock_page_memcg(), since I hesitate to steal the pleasure of deletion from Johannes.

This looks good to me. Particularly 2/3 made me go "Aww, yes" but the overall line removal stats look good too.

That said, I only looked at the patches, and not the end result itself. But not having the bit spin lock is, I think, a huge improvement.

I do wonder if this should be now just merged with your previous series - it looks a bit odd how your previous series adds that bitlock, only for it to be immediately removed.

But if you think the logic ends up being easier to follow this way as two separate patch series, I guess I don't care.

And the memcg locking is entirely a separate issue, and I hope Johannes will deal with that.

Thanks,
Linus
On Fri, Nov 18, 2022 at 12:18:42PM -0800, Linus Torvalds wrote:
> On Fri, Nov 18, 2022 at 1:08 AM Hugh Dickins <hughd@google.com> wrote:
> >
> > Linus was underwhelmed by the earlier compound mapcounts series: this series builds on top of it (as in next-20221117) to follow up on his suggestions - except rmap.c still using lock_page_memcg(), since I hesitate to steal the pleasure of deletion from Johannes.
>
> This looks good to me. Particularly 2/3 made me go "Aww, yes" but the overall line removal stats look good too.
>
> That said, I only looked at the patches, and not the end result itself. But not having the bit spin lock is, I think, a huge improvement.
>
> I do wonder if this should be now just merged with your previous series - it looks a bit odd how your previous series adds that bitlock, only for it to be immediately removed.
>
> But if you think the logic ends up being easier to follow this way as two separate patch series, I guess I don't care.
>
> And the memcg locking is entirely a separate issue, and I hope Johannes will deal with that.

Yeah, I'll redo the removal on top of this series and resend it.

Thanks
On Fri, 18 Nov 2022, Linus Torvalds wrote:
> On Fri, Nov 18, 2022 at 1:08 AM Hugh Dickins <hughd@google.com> wrote:
> >
> > Linus was underwhelmed by the earlier compound mapcounts series: this series builds on top of it (as in next-20221117) to follow up on his suggestions - except rmap.c still using lock_page_memcg(), since I hesitate to steal the pleasure of deletion from Johannes.
>
> This looks good to me. Particularly 2/3 made me go "Aww, yes" but the overall line removal stats look good too.
>
> That said, I only looked at the patches, and not the end result itself. But not having the bit spin lock is, I think, a huge improvement.

Great, thanks a lot for looking through.

> I do wonder if this should be now just merged with your previous series - it looks a bit odd how your previous series adds that bitlock, only for it to be immediately removed.
>
> But if you think the logic ends up being easier to follow this way as two separate patch series, I guess I don't care.

I rather like having its evolution on record there, but that might just be my sentimentality + laziness. Kirill did a grand job of reviewing the first series: I think that, at least for now, it would be easier for people to review the changes if the two series are not recombined.

But the first series has not yet graduated from mm-unstable, so if Andrew and/or Kirill also prefer to have them combined into one bit_spin_lock-less series, that I can do. (And the end result should be identical, so would not complicate Johannes's lock_page_memcg() excision.)

Hugh

> And the memcg locking is entirely a separate issue, and I hope Johannes will deal with that.
>
> Thanks,
> Linus
On Fri, 18 Nov 2022 12:51:09 -0800 (PST) Hugh Dickins <hughd@google.com> wrote:

> But the first series has not yet graduated from mm-unstable, so if Andrew and/or Kirill also prefer to have them combined into one bit_spin_lock-less series, that I can do. (And the end result should be identical, so would not complicate Johannes's lock_page_memcg() excision.)

I'd prefer that approach. It's -rc5 and the earlier "mm,huge,rmap: unify and speed up compound mapcounts" series has had some testing. I'd prefer not to toss it all out and start again.
On Fri, Nov 18, 2022 at 2:03 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> I'd prefer that approach.

The "that approach" is a bit ambiguous here, particularly considering how you quoted things.

But I think from the context you meant "keep them as two separate series, even if the second undoes part of the first and does it differently".

And that's fine. Even if it's maybe a bit odd to introduce that locking that then goes away, I can't argue with "the first series was already reviewed and has gone through a fair amount of testing".

Linus
On Fri, 18 Nov 2022, Andrew Morton wrote:
> On Fri, 18 Nov 2022 12:51:09 -0800 (PST) Hugh Dickins <hughd@google.com> wrote:
>
> > But the first series has not yet graduated from mm-unstable, so if Andrew and/or Kirill also prefer to have them combined into one bit_spin_lock-less series, that I can do. (And the end result should be identical, so would not complicate Johannes's lock_page_memcg() excision.)
>
> I'd prefer that approach.

I think you're saying that you prefer the other approach, to keep the two series separate (second immediately after the first, or not, doesn't matter), rather than combined into one bit_spin_lock-less series. Please clarify!

Thanks,
Hugh

> It's -rc5 and the earlier "mm,huge,rmap: unify and speed up compound mapcounts" series has had some testing. I'd prefer not to toss it all out and start again.
On Fri, 18 Nov 2022 14:10:32 -0800 (PST) Hugh Dickins <hughd@google.com> wrote:

> On Fri, 18 Nov 2022, Andrew Morton wrote:
> > On Fri, 18 Nov 2022 12:51:09 -0800 (PST) Hugh Dickins <hughd@google.com> wrote:
> >
> > > But the first series has not yet graduated from mm-unstable, so if Andrew and/or Kirill also prefer to have them combined into one bit_spin_lock-less series, that I can do. (And the end result should be identical, so would not complicate Johannes's lock_page_memcg() excision.)
> >
> > I'd prefer that approach.
>
> I think you're saying that you prefer the other approach, to keep the two series separate (second immediately after the first, or not, doesn't matter), rather than combined into one bit_spin_lock-less series. Please clarify! Thanks,

Yes, two separate series. Apologies for the confuddling.
On Fri, Nov 18, 2022 at 01:08:13AM -0800, Hugh Dickins wrote:
> Linus was underwhelmed by the earlier compound mapcounts series: this series builds on top of it (as in next-20221117) to follow up on his suggestions - except rmap.c still using lock_page_memcg(), since I hesitate to steal the pleasure of deletion from Johannes.

Is there a plan to remove lock_page_memcg() altogether which I missed? I am planning to make lock_page_memcg() a nop for cgroup-v2 (as it shows up in the perf profile on exit path) but if we are removing it then I should just wait.
On Mon, Nov 21, 2022 at 8:59 AM Shakeel Butt <shakeelb@google.com> wrote:
>
> Is there a plan to remove lock_page_memcg() altogether which I missed? I am planning to make lock_page_memcg() a nop for cgroup-v2 (as it shows up in the perf profile on exit path)

Yay. It seems I'm not the only one hating it.

> but if we are removing it then I should just wait.

Well, I think Johannes was saying that at least the case I disliked (the rmap removal from the page table tear-down - I strongly suspect it's the one you're seeing on your perf profile too) can be removed entirely as long as it's done under the page table lock (which my final version of the rmap delaying still was).

See

  https://lore.kernel.org/all/Y2llcRiDLHc2kg%2FN@cmpxchg.org/

for his preliminary patch.

That said, if you have some patch to make it a no-op for _other_ reasons, and could be done away with _entirely_ (not just for rmap), then that would be even better.

I am not a fan of that lock in general, but in the teardown rmap path it's actively horrifying because it is taken one page at a time. So it's taken a *lot* (although you might not see it if all you run is long-running benchmarks - it's mainly the "run lots of small scripts" case that really hits it).

The reason it seems to be so horrifyingly noticeable on the exit path is that the fork() side already does the rmap stuff (mainly __page_dup_rmap()) _without_ having to do the lock_page_memcg() dance.

So I really hate that lock. It's completely inconsistent, and it all feels very wrong. It seemed entirely pointless when I was looking at the rmap removal path for a single page.

The fact that both you and Johannes seem to be more than ready to just remove it makes me much happier, because I've never actually known the memcg code enough to do anything about my simmering hatred.

Linus
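To make the fork()/exit asymmetry described above concrete, here is a rough paraphrase of the two paths as they looked around this thread, condensed to their locking shape only (not a literal quote of either function):

    /* fork path: copy_present_pte() duplicates the rmap with a bare
     * atomic increment - no lock_page_memcg() anywhere in sight */
    static inline void __page_dup_rmap(struct page *page, bool compound)
    {
        atomic_inc(compound ? compound_mapcount_ptr(page) : &page->_mapcount);
    }

    /* exit path: zap_pte_range() calls this once per present pte, and each
     * call takes and releases the memcg lock around the mapcount/stat work */
    void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
                          bool compound)
    {
        lock_page_memcg(page);
        /* ... drop _mapcount (or the compound mapcount), and adjust the
         * NR_ANON_MAPPED / NR_FILE_MAPPED counters under the lock ... */
        unlock_page_memcg(page);
    }

That per-page lock/unlock pair is exactly what shows up in Shakeel's perf profile further down the thread.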
On Mon, Nov 21, 2022 at 04:59:38PM +0000, Shakeel Butt wrote:
> On Fri, Nov 18, 2022 at 01:08:13AM -0800, Hugh Dickins wrote:
> > Linus was underwhelmed by the earlier compound mapcounts series: this series builds on top of it (as in next-20221117) to follow up on his suggestions - except rmap.c still using lock_page_memcg(), since I hesitate to steal the pleasure of deletion from Johannes.
>
> Is there a plan to remove lock_page_memcg() altogether which I missed? I am planning to make lock_page_memcg() a nop for cgroup-v2 (as it shows up in the perf profile on exit path) but if we are removing it then I should just wait.

We can remove it for rmap at least, but we might be able to do more.

Besides rmap, we're left with the dirty and writeback page transitions that wrt cgroups need to be atomic with NR_FILE_DIRTY and NR_WRITEBACK.

Looking through the various callsites, I think we can delete it from setting and clearing dirty state, as we always hold the page lock (or the pte lock in some instances of folio_mark_dirty). Both of these are taken from the cgroup side, so we're good there.

I think we can also remove it when setting writeback, because those sites have the page locked as well.

That leaves clearing writeback. This can't hold the page lock due to the atomic context, so currently we need to take lock_page_memcg() as the lock of last resort.

I wonder if we can have cgroup take the xalock instead: writeback ending on file pages always acquires the xarray lock. Swap writeback currently doesn't, but we could make it so (swap_address_space).

The only thing that gives me pause is the !mapping check in __folio_end_writeback. File and swapcache pages usually have mappings, and truncation waits for writeback to finish before axing page->mapping. So AFAICS this can only happen if we call end_writeback on something that isn't under writeback - in which case the test_clear will fail and we don't update the stats anyway. But I want to be sure.

Does anybody know from the top of their heads if a page under writeback could be without a mapping in some weird cornercase?

If we could ensure that the NR_WRITEBACK decs are always protected by the xalock, we could grab it from mem_cgroup_move_account(), and then kill lock_page_memcg() altogether.
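As an aside, the "have cgroup take the xalock instead" idea for the swapcache case could look roughly like the sketch below. This is not a real patch, and folio_swap_mapping() is an assumed helper standing in for a swap_address_space() lookup; the point is only that the NR_WRITEBACK decrement would then be covered by the xarray lock, which mem_cgroup_move_account() could also take, instead of by lock_page_memcg():

    /* Sketch only: end writeback on a swapcache folio under the xarray
     * lock, the way the file path in __folio_end_writeback() already does. */
    static bool swap_end_writeback_sketch(struct folio *folio)
    {
        struct address_space *mapping = folio_swap_mapping(folio); /* assumed helper */
        unsigned long flags;
        bool ret;

        xa_lock_irqsave(&mapping->i_pages, flags);
        ret = folio_test_clear_writeback(folio);
        if (ret)
            /* the stat update is now under the same lock the cgroup
             * side could grab from mem_cgroup_move_account() */
            lruvec_stat_mod_folio(folio, NR_WRITEBACK, -folio_nr_pages(folio));
        xa_unlock_irqrestore(&mapping->i_pages, flags);

        return ret;
    }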
On Mon, 21 Nov 2022, Johannes Weiner wrote:
> On Mon, Nov 21, 2022 at 04:59:38PM +0000, Shakeel Butt wrote:
> > On Fri, Nov 18, 2022 at 01:08:13AM -0800, Hugh Dickins wrote:
> > > Linus was underwhelmed by the earlier compound mapcounts series: this series builds on top of it (as in next-20221117) to follow up on his suggestions - except rmap.c still using lock_page_memcg(), since I hesitate to steal the pleasure of deletion from Johannes.
> >
> > Is there a plan to remove lock_page_memcg() altogether which I missed? I am planning to make lock_page_memcg() a nop for cgroup-v2 (as it shows up in the perf profile on exit path) but if we are removing it then I should just wait.
>
> We can remove it for rmap at least, but we might be able to do more.

I hope the calls from mm/rmap.c can be deleted before deciding the bigger picture for lock_page_memcg() itself; getting rid of it would be very nice, but it has always had a difficult job to do (and you've devoted lots of good effort to minimizing it).

> Besides rmap, we're left with the dirty and writeback page transitions that wrt cgroups need to be atomic with NR_FILE_DIRTY and NR_WRITEBACK.
>
> Looking through the various callsites, I think we can delete it from setting and clearing dirty state, as we always hold the page lock (or the pte lock in some instances of folio_mark_dirty). Both of these are taken from the cgroup side, so we're good there.
>
> I think we can also remove it when setting writeback, because those sites have the page locked as well.
>
> That leaves clearing writeback. This can't hold the page lock due to the atomic context, so currently we need to take lock_page_memcg() as the lock of last resort.
>
> I wonder if we can have cgroup take the xalock instead: writeback ending on file pages always acquires the xarray lock. Swap writeback currently doesn't, but we could make it so (swap_address_space).

It's a little bit of a regression to have to take that lock when ending writeback on swap (compared with the rcu_read_lock() of almost every lock_page_memcg()); but I suppose if swap had been doing that all along, like the normal page cache case, I would not be complaining.

> The only thing that gives me pause is the !mapping check in __folio_end_writeback. File and swapcache pages usually have mappings, and truncation waits for writeback to finish before axing page->mapping. So AFAICS this can only happen if we call end_writeback on something that isn't under writeback - in which case the test_clear will fail and we don't update the stats anyway. But I want to be sure.
>
> Does anybody know from the top of their heads if a page under writeback could be without a mapping in some weird cornercase?

End of writeback has been a persistent troublemaker, in several ways; I forget whether we are content with it now or not. I would not trust whatever I think OTOH of that !mapping case, but I was deeper into it two years ago, and find myself saying "Can mapping be NULL? I don't see how, but allow for that with a WARN_ON_ONCE()" in a patch I posted then (but it didn't go in, we went in another direction). I'm pretty sure it never warned once for me, but I probably wasn't doing enough to test it. And IIRC I did also think that the !mapping check had perhaps been copied from a related function, one where it made more sense.

It's also worth noting that the two stats which get decremented there, NR_WRITEBACK and NR_ZONE_WRITE_PENDING, are two of the three which we have commented "Skip checking stats known to go negative occasionally" in mm/vmstat.c: I never did come up with a convincing explanation for that (Roman had his explanation, but I wasn't quite convinced). Maybe it would just be wrong to touch them if mapping were NULL.

> If we could ensure that the NR_WRITEBACK decs are always protected by the xalock, we could grab it from mem_cgroup_move_account(), and then kill lock_page_memcg() altogether.

I suppose so (but I still feel grudging about the xalock for swap).

Hugh
On Mon, Nov 21, 2022 at 01:52:23PM -0500, Johannes Weiner wrote:
> That leaves clearing writeback. This can't hold the page lock due to the atomic context, so currently we need to take lock_page_memcg() as the lock of last resort.
>
> I wonder if we can have cgroup take the xalock instead: writeback ending on file pages always acquires the xarray lock. Swap writeback currently doesn't, but we could make it so (swap_address_space).
>
> The only thing that gives me pause is the !mapping check in __folio_end_writeback. File and swapcache pages usually have mappings, and truncation waits for writeback to finish before axing page->mapping. So AFAICS this can only happen if we call end_writeback on something that isn't under writeback - in which case the test_clear will fail and we don't update the stats anyway. But I want to be sure.
>
> Does anybody know from the top of their heads if a page under writeback could be without a mapping in some weird cornercase?

I can't think of such a corner case. We should always wait for writeback to finish before removing the page from the page cache; the writeback bit used to be (and kind of still is) an implicit reference to the page, which means that we can't remove the page cache's reference to the page without waiting for writeback.

> If we could ensure that the NR_WRITEBACK decs are always protected by the xalock, we could grab it from mem_cgroup_move_account(), and then kill lock_page_memcg() altogether.

I'm not thrilled by this idea, but I'm not going to veto it.
On Tue, Nov 22, 2022 at 05:57:42AM +0000, Matthew Wilcox wrote:
> On Mon, Nov 21, 2022 at 01:52:23PM -0500, Johannes Weiner wrote:
> > That leaves clearing writeback. This can't hold the page lock due to the atomic context, so currently we need to take lock_page_memcg() as the lock of last resort.
> >
> > I wonder if we can have cgroup take the xalock instead: writeback ending on file pages always acquires the xarray lock. Swap writeback currently doesn't, but we could make it so (swap_address_space).
> >
> > The only thing that gives me pause is the !mapping check in __folio_end_writeback. File and swapcache pages usually have mappings, and truncation waits for writeback to finish before axing page->mapping. So AFAICS this can only happen if we call end_writeback on something that isn't under writeback - in which case the test_clear will fail and we don't update the stats anyway. But I want to be sure.
> >
> > Does anybody know from the top of their heads if a page under writeback could be without a mapping in some weird cornercase?
>
> I can't think of such a corner case. We should always wait for writeback to finish before removing the page from the page cache; the writeback bit used to be (and kind of still is) an implicit reference to the page, which means that we can't remove the page cache's reference to the page without waiting for writeback.

Great, thanks!

> > If we could ensure that the NR_WRITEBACK decs are always protected by the xalock, we could grab it from mem_cgroup_move_account(), and then kill lock_page_memcg() altogether.
>
> I'm not thrilled by this idea, but I'm not going to veto it.

Ok, I'm also happy to drop this one.

Certainly, the rmap one is the lowest-hanging fruit. I have the patch rebased against Hugh's series in mm-unstable; I'll wait for that to settle down, and then send an updated version to Andrew.
On Mon, Nov 21, 2022 at 09:16:58AM -0800, Linus Torvalds wrote:
> On Mon, Nov 21, 2022 at 8:59 AM Shakeel Butt <shakeelb@google.com> wrote:
> >
> > Is there a plan to remove lock_page_memcg() altogether which I missed? I am planning to make lock_page_memcg() a nop for cgroup-v2 (as it shows up in the perf profile on exit path)
>
> Yay. It seems I'm not the only one hating it.
>
> > but if we are removing it then I should just wait.
>
> Well, I think Johannes was saying that at least the case I disliked (the rmap removal from the page table tear-down - I strongly suspect it's the one you're seeing on your perf profile too)

Yes indeed that is the one.

- 99.89% 0.00% fork-large-mmap  [kernel.kallsyms]  [k] entry_SYSCALL_64_after_hwframe
   - entry_SYSCALL_64_after_hwframe
      - do_syscall_64
         - 48.94% __x64_sys_exit_group
              do_group_exit
            - do_exit
               - 48.94% exit_mm
                    mmput
                  - __mmput
                     - exit_mmap
                        - 48.61% unmap_vmas
                           - 48.61% unmap_single_vma
                              - unmap_page_range
                                 - 48.60% zap_p4d_range
                                    - 44.66% zap_pte_range
                                       + 12.61% tlb_flush_mmu
                                       - 9.38% page_remove_rmap
                                            2.50% lock_page_memcg
                                            2.37% unlock_page_memcg
                                            0.61% PageHuge
                                         4.80% vm_normal_page
                                         2.56% __tlb_remove_page_size
                                         0.85% lock_page_memcg
                                         0.53% PageHuge
                                      2.22% __tlb_remove_page_size
                                      0.93% vm_normal_page
                                      0.72% page_remove_rmap

> can be removed entirely as long as it's done under the page table lock (which my final version of the rmap delaying still was).
>
> See
>
>   https://lore.kernel.org/all/Y2llcRiDLHc2kg%2FN@cmpxchg.org/
>
> for his preliminary patch.
>
> That said, if you have some patch to make it a no-op for _other_ reasons, and could be done away with _entirely_ (not just for rmap), then that would be even better.

I am actually looking at deprecating the whole "move charge" functionality of cgroup-v1, i.e. the underlying reason lock_page_memcg exists. That already does not work for a couple of cases like partially mapped THP and madv_free'd pages. Though that deprecation process would take some time. In the meantime I was looking at whether we can make these functions a nop for cgroup-v2.

thanks,
Shakeel
On Tue, Nov 22, 2022 at 01:55:39AM -0500, Johannes Weiner wrote:
> On Tue, Nov 22, 2022 at 05:57:42AM +0000, Matthew Wilcox wrote:
> > On Mon, Nov 21, 2022 at 01:52:23PM -0500, Johannes Weiner wrote:
> > > That leaves clearing writeback. This can't hold the page lock due to the atomic context, so currently we need to take lock_page_memcg() as the lock of last resort.
> > >
> > > I wonder if we can have cgroup take the xalock instead: writeback ending on file pages always acquires the xarray lock. Swap writeback currently doesn't, but we could make it so (swap_address_space).
> > >
> > > The only thing that gives me pause is the !mapping check in __folio_end_writeback. File and swapcache pages usually have mappings, and truncation waits for writeback to finish before axing page->mapping. So AFAICS this can only happen if we call end_writeback on something that isn't under writeback - in which case the test_clear will fail and we don't update the stats anyway. But I want to be sure.
> > >
> > > Does anybody know from the top of their heads if a page under writeback could be without a mapping in some weird cornercase?
> >
> > I can't think of such a corner case. We should always wait for writeback to finish before removing the page from the page cache; the writeback bit used to be (and kind of still is) an implicit reference to the page, which means that we can't remove the page cache's reference to the page without waiting for writeback.
>
> Great, thanks!
>
> > > If we could ensure that the NR_WRITEBACK decs are always protected by the xalock, we could grab it from mem_cgroup_move_account(), and then kill lock_page_memcg() altogether.
> >
> > I'm not thrilled by this idea, but I'm not going to veto it.
>
> Ok, I'm also happy to drop this one.
>
> Certainly, the rmap one is the lowest-hanging fruit. I have the patch rebased against Hugh's series in mm-unstable; I'll wait for that to settle down, and then send an updated version to Andrew.

I am planning to initiate the deprecation of the move charge functionality of v1. So I would say let's go with the low-hanging fruit for now and let the slow process of deprecation remove the remaining cases.