Message ID | 20230626204612.106165-1-lstoakes@gmail.com |
---|---|
State | New |
Headers |
Return-Path: <linux-kernel-owner@vger.kernel.org> Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp7773626vqr; Mon, 26 Jun 2023 14:46:26 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ7IfAHHbQLuMMTXSZeokHypa2kftjzuY1JuwQHqob9FpZMLLlVOHWmY3S+hnD32OC9tqwbo X-Received: by 2002:a17:907:318c:b0:987:e230:690 with SMTP id xe12-20020a170907318c00b00987e2300690mr25106176ejb.57.1687815985785; Mon, 26 Jun 2023 14:46:25 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1687815985; cv=none; d=google.com; s=arc-20160816; b=qY4ZJmIGlApklYi6F4Wa7dBc1Ss9Bve/fch/1iY8blAB1KELCZIa+AMwoHhKUPwX0i MhX7fTfmMnkliNuwHRJB6YipBpzY28bpZEOCE4mGAqQbDzxZJZfoOZdgdCEaXfnD+tMN mYhX+VXpfkqsfr7j3lRrMvKiMSy3NRmzSEWJ3FLazBD9aAby2R6R4aPZ+mLsryoWrK2u ZdZRvDF+s20pyS37FtN12a2Lvu8oU+3i5tMw13j35rTyFAK+2M6QEFx7jjpSB3QOr3oz 8dNFFU0aIFFfcN3KRA719g9o/n9Gg2iVEz8GOntTWxUkUGzHRx61/s1zjJvm9rMgV5zK gCxg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :message-id:date:subject:cc:to:from:dkim-signature; bh=2gaBnNp5IBEywyARaUu76L5/AmouMcYydCft2iETs/w=; fh=QRnpIgBF0qsKZ1MJcXw8QFV461uOfSEBIKVT8KTWVXs=; b=VT9IJreC6/2YfSXR6xABwXz1smxHLEV0RgIibt7yfsNHMTpHvZQLz/jzjc5CV6JEU3 0nJjMUT0uLj8669e+ZiKHjcWRWYmbdgAD4TdbO15MVNhZF5m8lO3GYAyFBVcWaTslSFh ZfWjRguN+iLSDuXMye8ZlDUgih2QCyWkDbQlIVR1tc78k0CMC4qMQEJAwnZrUcRzLJ/l RvI5gzWs0i6Q5otAO0iWJa+hLcv+qKNRk/FnTJw1arHT9dQp5zoVt+sXvYpr+SNzD0Lv 2TAIb0J5iJkR2Sk38GRqYCmgVLxMyWeFPWfq9XoxabxaFL0UkAkoXDbJqdBNxyOtRkPM 9Nnw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@gmail.com header.s=20221208 header.b=Z1uS+Wpd; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=QUARANTINE dis=NONE) header.from=gmail.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id n11-20020a170906088b00b0098f99532db9si1814840eje.662.2023.06.26.14.46.01; Mon, 26 Jun 2023 14:46:25 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@gmail.com header.s=20221208 header.b=Z1uS+Wpd; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=QUARANTINE dis=NONE) header.from=gmail.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230311AbjFZUqZ (ORCPT <rfc822;filip.gregor98@gmail.com> + 99 others); Mon, 26 Jun 2023 16:46:25 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45530 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230022AbjFZUqW (ORCPT <rfc822;linux-kernel@vger.kernel.org>); Mon, 26 Jun 2023 16:46:22 -0400 Received: from mail-wm1-x32f.google.com (mail-wm1-x32f.google.com [IPv6:2a00:1450:4864:20::32f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7A129198C for <linux-kernel@vger.kernel.org>; Mon, 26 Jun 2023 13:46:17 -0700 (PDT) Received: by mail-wm1-x32f.google.com with SMTP id 5b1f17b1804b1-3fa99742bd1so12472205e9.3 for <linux-kernel@vger.kernel.org>; Mon, 26 Jun 2023 13:46:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1687812376; x=1690404376; h=content-transfer-encoding:mime-version:message-id:date:subject:cc :to:from:from:to:cc:subject:date:message-id:reply-to; bh=2gaBnNp5IBEywyARaUu76L5/AmouMcYydCft2iETs/w=; b=Z1uS+WpdhZxDdhL8sris3m1/L9QbHq0SHLWsGQwnXANleHoAyDYRhYh1WRtfU5y/TW x1kAYzGFzqawHRlabWEV4rsr9Wctw4yhgdFlRQMukZgOXD6RH+VPQDrCidrf/QR7u636 edAgMbP0v0zoNZdDbdx6WuWMOhPbi3Z02IZtc7OXr+3g9A+Z1L8gNs99ErJQept0HmnH XHuTVLZ7knD8orstCeXrk43WVXUo4jXZzIoytpmEEkSljL7sYRjkGbKX2KH2XNLQ2XDF NxMPxeSC83ITv0h3cLQGqzFefWq1sapQyQDVA3IopTwUfkNmvlHUV9EYhL5da4DzA5f3 vsmA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1687812376; x=1690404376; h=content-transfer-encoding:mime-version:message-id:date:subject:cc :to:from:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=2gaBnNp5IBEywyARaUu76L5/AmouMcYydCft2iETs/w=; b=VcoZDGfUDf0/1NwHQ4QiJNrfWU3t1hKitpQgwhB3uwqwiAPjtWDQGmyJBFL887G3FO sDA7paJBdToYmQgGNWYnykjimtJN+4qobMU+X4YFXoRsxTzkc9Hm11IpLC/lN9knTA3x XBU3qek0Cn+lPSrmTtGkVY0QIdkem8wOIKdtFdLeR+pLKIKPwnv35cFKulcY4HtwF29E UC1iBRb1P0tE21I8K1StkwZLTnHbeaetQrJpVdYpwc9ODSXC0NyBaLDGDVFy5uNdkEU8 moAmDyjH5p5zMWhN5L5x69z+0s1wP0CAfuE3YmI3SW9PTTVCHN1u5bjVzN8oF47enUOm KJEw== X-Gm-Message-State: AC+VfDxwW6g+YZ3hebO7begUeKW6htLwh0hAa/8gnTZwP5Gru7Q1YvDw HJGqLZKM607y3PEZBNs3Xro= X-Received: by 2002:a7b:c7c5:0:b0:3fa:9554:fb23 with SMTP id z5-20020a7bc7c5000000b003fa9554fb23mr2756440wmk.21.1687812375494; Mon, 26 Jun 2023 13:46:15 -0700 (PDT) Received: from lucifer.home ([2a00:23c5:dc8c:8701:1663:9a35:5a7b:1d76]) by smtp.googlemail.com with ESMTPSA id d11-20020a1c730b000000b003fb416d732csm1111726wmb.6.2023.06.26.13.46.14 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 26 Jun 2023 13:46:14 -0700 (PDT) From: Lorenzo Stoakes <lstoakes@gmail.com> To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton <akpm@linux-foundation.org> Cc: Mike Rapoport <rppt@kernel.org>, David Hildenbrand <david@redhat.com>, "Liam R . 
Howlett" <Liam.Howlett@oracle.com>, Vlastimil Babka <vbabka@suse.cz>, Lorenzo Stoakes <lstoakes@gmail.com> Subject: [PATCH] mm/mprotect: allow unfaulted VMAs to be unaccounted on mprotect() Date: Mon, 26 Jun 2023 21:46:12 +0100 Message-ID: <20230626204612.106165-1-lstoakes@gmail.com> X-Mailer: git-send-email 2.41.0 MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,FREEMAIL_FROM, RCVD_IN_DNSWL_NONE,SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: <linux-kernel.vger.kernel.org> X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1769803335113264520?= X-GMAIL-MSGID: =?utf-8?q?1769803335113264520?= |
Series | mm/mprotect: allow unfaulted VMAs to be unaccounted on mprotect() |
Commit Message
Lorenzo Stoakes
June 26, 2023, 8:46 p.m. UTC
When mprotect() is used to make unwritable VMAs writable, they have the
VM_ACCOUNT flag applied and memory accounted accordingly.
If the VMA has had no pages faulted in and is then made unwritable once
again, it will remain accounted for, despite not being capable of extending
memory usage.
Consider:-
ptr = mmap(NULL, page_size * 3, PROT_READ, MAP_ANON | MAP_PRIVATE, -1, 0);
mprotect(ptr + page_size, page_size, PROT_READ | PROT_WRITE);
mprotect(ptr + page_size, page_size, PROT_READ);
The first mprotect() splits the range into 3 VMAs and the second fails to
merge the three as the middle VMA has VM_ACCOUNT set and the others do not,
rendering them unmergeable.
This is unnecessary, since no pages have actually been allocated and the
middle VMA is not capable of utilising more memory, thereby introducing
unnecessary VMA fragmentation (and accounting for more memory than is
necessary).
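For concreteness, this can be observed directly. The following is a
self-contained version of the reproducer (the dump_maps() helper is our
illustrative addition, not part of the patch):

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Print our own mappings so the VMA split is visible. */
static void dump_maps(void)
{
        char line[256];
        FILE *f = fopen("/proc/self/maps", "r");

        if (!f)
                return;
        while (fgets(line, sizeof(line), f))
                fputs(line, stdout);
        fclose(f);
}

int main(void)
{
        long page_size = sysconf(_SC_PAGESIZE);
        char *ptr = mmap(NULL, page_size * 3, PROT_READ,
                         MAP_ANON | MAP_PRIVATE, -1, 0);

        if (ptr == MAP_FAILED)
                return 1;

        mprotect(ptr + page_size, page_size, PROT_READ | PROT_WRITE);
        mprotect(ptr + page_size, page_size, PROT_READ);

        /*
         * Without the patch the anonymous mapping shows up as three
         * read-only VMAs; with it, the three can merge back into one.
         */
        dump_maps();
        return 0;
}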
Since we cannot efficiently determine which pages map to an anonymous VMA,
we have to be very conservative - determining whether any pages at all have
been faulted in, by checking whether vma->anon_vma is NULL.
We can see that the lack of anon_vma implies that no anonymous pages are
present, as evidenced by vma_needs_copy() utilising this on fork to
determine whether page tables need to be copied.
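For reference, that check reads roughly as follows (a paraphrase of
vma_needs_copy() from mm/memory.c around this kernel version; consult the
tree for the authoritative code):

/*
 * Paraphrased sketch: absent uffd-wp and PFN/mixed mappings, fork only
 * copies page tables when the source VMA has an anon_vma - i.e. when
 * anonymous pages may actually be present.
 */
static bool
vma_needs_copy(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
{
        if (userfaultfd_wp(dst_vma))
                return true;
        if (src_vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
                return true;
        if (src_vma->anon_vma)
                return true;
        return false;
}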
The only place where anon_vma is explicitly set to NULL is on fork with
VM_WIPEONFORK set; however, since this flag is intended to cause the child
process not to CoW on a given memory range, it is right to interpret this
as indicating the VMA has no faulted-in anonymous memory mapped.
If the VMA was forked without VM_WIPEONFORK set, then anon_vma_fork() will
have ensured that a new anon_vma is assigned (and correctly related to its
parent anon_vma) should any pages be CoW-mapped.
The overall operation is safe against races as we hold a write lock against
mm->mmap_lock.
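The lock is taken at the top of the syscall path; the following is a
paraphrased sketch of do_mprotect_pkey() in mm/mprotect.c, not the full
function:

        /*
         * mprotect_fixup() runs entirely under the mmap write lock, so a
         * concurrent page fault cannot install an anon_vma between our
         * NULL check and the flag update.
         */
        if (mmap_write_lock_killable(current->mm))
                return -EINTR;

        /* ... VMA lookup and mprotect_fixup() calls ... */

        mmap_write_unlock(current->mm);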
If we could efficiently look up the VMA's faulted-in pages then we would
unaccount all those pages not yet faulted in. However, as the original
comment alludes to, this simply isn't currently possible, so we remain
conservative and account either all pages or none at all.
Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>
---
mm/mprotect.c | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)
Comments
On 6/26/23 22:46, Lorenzo Stoakes wrote:
> When mprotect() is used to make unwritable VMAs writable, they have the
> VM_ACCOUNT flag applied and memory accounted accordingly.
>
> If the VMA has had no pages faulted in and is then made unwritable once
> again, it will remain accounted for, despite not being capable of
> extending memory usage.
>
> Consider:-
>
> ptr = mmap(NULL, page_size * 3, PROT_READ, MAP_ANON | MAP_PRIVATE, -1, 0);
> mprotect(ptr + page_size, page_size, PROT_READ | PROT_WRITE);
> mprotect(ptr + page_size, page_size, PROT_READ);

In the original Mike's example there were actual pages populated; in that
case we still won't merge the vma's, right? Guess that can't be helped.

[...]

> Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>

So in practice programs will likely do the PROT_WRITE in order to actually
populate the area, so this won't trigger as I commented above. But it can
still help in some cases and is cheap to do, so:

Acked-by: Vlastimil Babka <vbabka@suse.cz>

[...]
Hi all,

On 27.06.23 08:28, Vlastimil Babka wrote:
> On 6/26/23 22:46, Lorenzo Stoakes wrote:
>> [...]
>> ptr = mmap(NULL, page_size * 3, PROT_READ, MAP_ANON | MAP_PRIVATE, -1, 0);
>> mprotect(ptr + page_size, page_size, PROT_READ | PROT_WRITE);
>> mprotect(ptr + page_size, page_size, PROT_READ);
>
> In the original Mike's example there were actual pages populated; in that
> case we still won't merge the vma's, right? Guess that can't be helped.

I am clearly missing the motivation for this patch: above is an artificial
reproducer that I cannot really imagine being relevant in practice.

So is there any sane workload that does random mprotect() without even
touching memory once? Sure, fuzzing, ... artificial reproducers ... but is
there any real-life problem we're solving here?

IOW, why did you (Lorenzo) invest time optimizing for this and crafting
this patch, and why should reviewers invest time to understand if it's
correct? :)

[...]

> So in practice programs will likely do the PROT_WRITE in order to actually
> populate the area, so this won't trigger as I commented above. But it can
> still help in some cases and is cheap to do, so:

IMHO we should much rather look into getting hugetlb ranges merged. My
recollection is that we'll never end up merging hugetlb VMAs once split.

This patch adds code without a clear motivation. Maybe there is a good
motivation?
On Tue, Jun 27, 2023 at 08:59:33AM +0200, David Hildenbrand wrote:
> I am clearly missing the motivation for this patch: above is an artificial
> reproducer that I cannot really imagine being relevant in practice.

I cc'd you on this patch exactly because I knew you'd scrutinise it
greatly :)

Well the motivator for the initial investigation was rppt playing with
R[WO]X (this came from an #mm irc conversation), however in his case he
will be mapping pages between the two.

(apologies to rppt, I forgot to add the Reported-By...)

> So is there any sane workload that does random mprotect() without even
> touching memory once? [...]
>
> IOW, why did you (Lorenzo) invest time optimizing for this and crafting
> this patch, and why should reviewers invest time to understand if it's
> correct? :)

So why I (that Stoakes guy) invested time here was, well I had chased down
the issue for rppt out of curiosity, and 'proved' the point by making
essentially this patch.

I dug into it further and (as the patch message alludes to) have convinced
myself that this is safe, so essentially why NOT submit it :)

In real-use scenarios, yes fuzzers are a thing, but what comes to mind more
immediately is a process that maps a big chunk of virtual memory PROT_NONE
and uses that as part of an internal allocator.

If the process then allocates memory from this chunk (mprotect() ->
PROT_READ | PROT_WRITE), which then gets freed without being used
(mprotect() -> PROT_NONE) we hit the issue. For OVERCOMMIT_NEVER this could
become quite an issue, more so than the VMA fragmentation.

In addition, I think a user simply doing the artificial test above would
find the split remaining quite confusing, and somebody debugging some code
like this would equally wonder why it happened, so there is benefit in
clarity too (they of course observing the VMA fragmentation from the
perspective of /proc/$pid/[s]maps).

I believe given we hold a very strong lock (write on mm->mmap_lock) and
that vma->anon_vma being NULL really does seem to imply no pages have been
allocated, that this is therefore a safe thing to do and worthwhile.

[...]

> IMHO we should much rather look into getting hugetlb ranges merged. My
> recollection is that we'll never end up merging hugetlb VMAs once split.

I'm not sure how that's relevant to fragmented non-hugetlb VMAs though?

> This patch adds code without a clear motivation. Maybe there is a good
> motivation?

See above for motivational thoughts :)
On 27.06.23 10:49, Lorenzo Stoakes wrote:
> I cc'd you on this patch exactly because I knew you'd scrutinise it
> greatly :)

Yeah, and that needs time and you have to motivate me :)

> Well the motivator for the initial investigation was rppt playing with
> R[WO]X (this came from an #mm irc conversation), however in his case he
> will be mapping pages between the two.

And that's the scenario I think we care about in practice (actually
accessing memory).

[...]

> In real-use scenarios, yes fuzzers are a thing, but what comes to mind more
> immediately is a process that maps a big chunk of virtual memory PROT_NONE
> and uses that as part of an internal allocator.
>
> If the process then allocates memory from this chunk (mprotect() ->
> PROT_READ | PROT_WRITE), which then gets freed without being used
> (mprotect() -> PROT_NONE) we hit the issue. For OVERCOMMIT_NEVER this could
> become quite an issue, more so than the VMA fragmentation.

Using mprotect() when allocating/freeing memory in an allocator is already
horribly harmful for performance (well, and the #VMAs), so I don't think
that scenario is relevant in practice.

What some allocators (iirc even glibc) do is reserve a bigger area with
PROT_NONE and grow the accessible part slowly on demand, discarding freed
memory using MADV_DONTNEED. So you essentially end up with two VMAs -- one
completely accessible, one completely inaccessible.

They don't use mprotect() because:
(a) It's bad for performance
(b) It might increase the #VMAs

There is efence, but I remember it simply does mmap()+munmap() and runs
into VMA limits easily just by relying on a lot of mappings.

> In addition, I think a user simply doing the artificial test above would
> find the split remaining quite confusing, and somebody debugging some code
> like this would equally wonder why it happened, so there is benefit in
> clarity too (they of course observing the VMA fragmentation from the
> perspective of /proc/$pid/[s]maps).

My answer would have been "memory gets committed the first time we allow
write access, and that wasn't the case for all memory in that range".

Now, take your example above and touch the memory.

ptr = mmap(NULL, page_size * 3, PROT_READ, MAP_ANON | MAP_PRIVATE, -1, 0);
mprotect(ptr + page_size, page_size, PROT_READ | PROT_WRITE);
*(ptr + page_size) = 1;
mprotect(ptr + page_size, page_size, PROT_READ);

And we'll not merge the VMAs.

Which, at least to me, makes existing handling more consistent. And users
could rightfully wonder "why isn't it getting merged". And the answer would
be the same: "memory gets committed the first time we allow write access,
and that wasn't the case for all memory in that range".

> I believe given we hold a very strong lock (write on mm->mmap_lock) and
> that vma->anon_vma being NULL really does seem to imply no pages have been
> allocated, that this is therefore a safe thing to do and worthwhile.

Do we have to care about the VMA locks now that pagefaults can be served
without the mmap_lock in write mode?

[...]

>>> So in practice programs will likely do the PROT_WRITE in order to actually
>>> populate the area, so this won't trigger as I commented above. But it can
>>> still help in some cases and is cheap to do, so:
>>
>> IMHO we should much rather look into getting hugetlb ranges merged. My
>> recollection is that we'll never end up merging hugetlb VMAs once split.
>
> I'm not sure how that's relevant to fragmented non-hugetlb VMAs though?

It's a VMA merging issue that can be hit in practice, so I raised it.

No strong opinion from my side, just my 2 cents reading the patch
description and wondering "why do we even invest time thinking about this
case" -- and eventually make handling less consistent IMHO (see above).
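As a concrete illustration of the reserve-and-grow pattern described above
(a sketch only: names like arena_grow() are ours, and real allocators add
locking, alignment checks and error handling):

#include <stddef.h>
#include <sys/mman.h>

#define ARENA_SIZE      (1UL << 30)     /* 1 GiB reservation */

struct arena {
        char *base;
        size_t committed;       /* accessible prefix, page-aligned */
};

static int arena_init(struct arena *a)
{
        /* One big inaccessible reservation: a single PROT_NONE VMA. */
        a->base = mmap(NULL, ARENA_SIZE, PROT_NONE,
                       MAP_ANON | MAP_PRIVATE | MAP_NORESERVE, -1, 0);
        a->committed = 0;
        return a->base == MAP_FAILED ? -1 : 0;
}

/* Grow the accessible prefix on demand: still only two VMAs. */
static int arena_grow(struct arena *a, size_t new_committed)
{
        if (mprotect(a->base, new_committed, PROT_READ | PROT_WRITE))
                return -1;
        a->committed = new_committed;
        return 0;
}

/* Discard freed memory without changing protections, so no VMA split
 * occurs (off and len must be page-aligned). */
static void arena_discard(struct arena *a, size_t off, size_t len)
{
        madvise(a->base + off, len, MADV_DONTNEED);
}

The point being that MADV_DONTNEED returns the pages while leaving
protections (and hence the VMA layout) untouched, whereas an
mprotect(PROT_NONE) of each freed range would split the accessible VMA.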
On Tue, Jun 27, 2023 at 11:13:59AM +0200, David Hildenbrand wrote:
> Yeah, and that needs time and you have to motivate me :)

Beer? ;)

>> Well the motivator for the initial investigation was rppt playing with
>> R[WO]X (this came from an #mm irc conversation), however in his case he
>> will be mapping pages between the two.
>
> And that's the scenario I think we care about in practice (actually
> accessing memory).

Yes indeed, I mean I am not denying this patch is edge case stuff; in
reality you'd allocate pages, and correctly that would be accountable, the
unallocated R/O bit not, and it would remain accountable.

[...]

> Using mprotect() when allocating/freeing memory in an allocator is already
> horribly harmful for performance (well, and the #VMAs), so I don't think
> that scenario is relevant in practice.

Chrome for instance maintains vast memory ranges as PROT_NONE. I've not dug
into what they're doing, but surely to make use of them they'd need to
mprotect() or mmap()/mremap() (which maybe is what the intent is).

But fair point. However I can't imagine m[re]map'ing like this would be
cheap either, as you're doing the same kind of expensive operations, so the
general _approach_ seems like it's used in some way in practice.

[...]

> Now, take your example above and touch the memory. [...]
>
> And we'll not merge the VMAs.
>
> Which, at least to me, makes existing handling more consistent.

Indeed, but I don't think it's currently consistent at all.

The 'correct' solution would be to:-

1. account for the block when it becomes writable
2. unaccount for any pages not used when it becomes unwritable

However since we can't go from vma -> folios for anon pages without some
extreme effort this is not feasible.

Therefore the existing code hacks it and just keeps things accountable.

The patch reduces the hacking so we get halfway to the correct approach.

So before: "if you ever make this read/write, we account it forever"
After: "if you ever make this read/write and USE IT, we account it forever"

To me it is more consistent. Of course this is subjective...

> And users could rightfully wonder "why isn't it getting merged". And the
> answer would be the same: "memory gets committed the first time we allow
> write access, and that wasn't the case for all memory in that range".

Yes indeed, a bigger answer is that we don't have fine-grained accounting
for pages for anon_vma.

> Do we have to care about the VMA locks now that pagefaults can be served
> without the mmap_lock in write mode?

Any switch to VMA locking would need careful attention applied to mprotect
and require equally strong assurances, given we are fiddling with entries
in the maple tree (and more broadly the mmap_lock implies something
stronger).

[...]

>> IMHO we should much rather look into getting hugetlb ranges merged. My
>> recollection is that we'll never end up merging hugetlb VMAs once split.
>
> It's a VMA merging issue that can be hit in practice, so I raised it.
>
> No strong opinion from my side, just my 2 cents reading the patch
> description and wondering "why do we even invest time thinking about this
> case" -- and eventually make handling less consistent IMHO (see above).

Hmm it seems like you have quite a strong opinion :P but this is why I cc'd
you, as you are a great scrutiniser.

Yeah, the time investment was just by accident, the patch was originally a
throwaway thing to prove the point :]

I very much appreciate your time though! And I owe you at least one beer
now.

I would ask that while you might question the value, whether you think it
so harmful as not to go in, so Andrew can know whether this debate = don't
take?

An Ack-with-meh would be fine. But also if you want to nak, it's also fine.
I will buy you the beer either way ;)
[...]

>> Yeah, and that needs time and you have to motivate me :)
>
> Beer? ;)

Oh, that always works :)

[...]

> Chrome for instance maintains vast memory ranges as PROT_NONE. I've not dug
> into what they're doing, but surely to make use of them they'd need to
> mprotect() or mmap()/mremap() (which maybe is what the intent is).

I suspect they are doing something similar to glibc (and some other
allocators like jemalloc IIRC), because they want to minimize the #VMAs.

> But fair point. However I can't imagine m[re]map'ing like this would be
> cheap either, as you're doing the same kind of expensive operations, so the
> general _approach_ seems like it's used in some way in practice.

Usually people access memory and not play mprotect() games for fun :)

[...]

> Indeed, but I don't think it's currently consistent at all.
>
> The 'correct' solution would be to:-
>
> 1. account for the block when it becomes writable
> 2. unaccount for any pages not used when it becomes unwritable

I've been messing with something related (but slightly different) for a
while now in my mind, and I'm not at the point where I can talk about my
work/idea yet. But because I've been messing with it, I can comment on some
existing oddities.

Just imagine:
* userfaultfd() can place anon pages even in PROT_NONE areas
* ptrace can place anon pages in PROT_READ areas
* "fun" like the forbidden shared zeropage on s390x in some VMAs can place
  anon pages into PROT_READ areas.

It's all far from "correct" when talking about memory accounting. But it
seems to get the job done for most cases for now.

> So before: "if you ever make this read/write, we account it forever"
> After: "if you ever make this read/write and USE IT, we account it forever"

"USE" is probably the wrong word. Maybe "MODIFIED", but there are other
cases (MADV_POPULATE_WRITE).

> To me it is more consistent. Of course this is subjective...

You made the conditional more complicated to make it consistent, won't
argue with that :)

> Yes indeed, a bigger answer is that we don't have fine-grained accounting
> for pages for anon_vma.

Yes, VM_ACCOUNT is all-or-nothing, which makes a lot of sense in many cases
(not in all, though).

[...]

> Hmm it seems like you have quite a strong opinion :P but this is why I cc'd
> you, as you are a great scrutiniser.

I might make it sound like a strong opinion (because I am challenging the
motivation), but there is no nak :)

> I would ask that while you might question the value, whether you think it
> so harmful as not to go in, so Andrew can know whether this debate = don't
> take?
>
> An Ack-with-meh would be fine. But also if you want to nak, it's also fine.
> I will buy you the beer either way ;)

It's more a "no nak" -- I don't see the real benefit but I also don't see
the harm (as long as VMA locking is not an issue). If others see the
benefit, great, so I'll let them decide.
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 6f658d483704..9461c936082b 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -607,8 +607,11 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
 	/*
 	 * If we make a private mapping writable we increase our commit;
 	 * but (without finer accounting) cannot reduce our commit if we
-	 * make it unwritable again. hugetlb mapping were accounted for
-	 * even if read-only so there is no need to account for them here
+	 * make it unwritable again except in the anonymous case where no
+	 * anon_vma has yet been assigned.
+	 *
+	 * hugetlb mapping were accounted for even if read-only so there is
+	 * no need to account for them here.
 	 */
 	if (newflags & VM_WRITE) {
 		/* Check space limits when area turns into data. */
@@ -622,6 +625,9 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
 				return -ENOMEM;
 			newflags |= VM_ACCOUNT;
 		}
+	} else if ((oldflags & VM_ACCOUNT) && vma_is_anonymous(vma) &&
+		   !vma->anon_vma) {
+		newflags &= ~VM_ACCOUNT;
 	}
 
 	/*
@@ -652,6 +658,9 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
 	}
 
 success:
+	if ((oldflags & VM_ACCOUNT) && !(newflags & VM_ACCOUNT))
+		vm_unacct_memory(nrpages);
+
 	/*
 	 * vm_flags and vm_page_prot are protected by the mmap_lock
 	 * held in write mode.
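As a rough userspace check of the accounting change (illustrative only:
Committed_AS is system-wide and noisy, so run on an otherwise quiet
system), one might compare /proc/meminfo around the mprotect() round-trip.
Without the patch the delta should stay at one page; with it, the delta
should return to roughly zero:

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Read Committed_AS (in kB) from /proc/meminfo. */
static long committed_kb(void)
{
        char line[128];
        long kb = -1;
        FILE *f = fopen("/proc/meminfo", "r");

        while (f && fgets(line, sizeof(line), f))
                if (sscanf(line, "Committed_AS: %ld kB", &kb) == 1)
                        break;
        if (f)
                fclose(f);
        return kb;
}

int main(void)
{
        long page_size = sysconf(_SC_PAGESIZE);
        char *ptr = mmap(NULL, page_size * 3, PROT_READ,
                         MAP_ANON | MAP_PRIVATE, -1, 0);
        long before, after;

        if (ptr == MAP_FAILED)
                return 1;

        before = committed_kb();
        mprotect(ptr + page_size, page_size, PROT_READ | PROT_WRITE);
        mprotect(ptr + page_size, page_size, PROT_READ);
        after = committed_kb();

        printf("Committed_AS delta: %ld kB\n", after - before);
        return 0;
}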