Message ID | 20230101230042.244286-1-jthoughton@google.com |
---|---|
State | New |
Headers |
Date: Sun, 1 Jan 2023 23:00:42 +0000
Subject: [PATCH] hugetlb: unshare some PMDs when splitting VMAs
From: James Houghton <jthoughton@google.com>
To: Mike Kravetz <mike.kravetz@oracle.com>, Muchun Song <songmuchun@bytedance.com>, Peter Xu <peterx@redhat.com>
Cc: Axel Rasmussen <axelrasmussen@google.com>, Andrew Morton <akpm@linux-foundation.org>, linux-mm@kvack.org, linux-kernel@vger.kernel.org, James Houghton <jthoughton@google.com>
Message-ID: <20230101230042.244286-1-jthoughton@google.com> |
Series | hugetlb: unshare some PMDs when splitting VMAs |
Commit Message
James Houghton
Jan. 1, 2023, 11 p.m. UTC
PMD sharing can only be done in PUD_SIZE-aligned pieces of VMAs;
however, it is possible that HugeTLB VMAs are split without unsharing
the PMDs first.
In some (most?) cases, this is a non-issue, like userfaultfd_register
and mprotect, where PMDs are unshared before anything is done. However,
mbind() and madvise() (like MADV_DONTDUMP) can cause a split without
unsharing first.
It might seem ideal to unshare in hugetlb_vm_op_open, but that would
only unshare PMDs in the new VMA.
Signed-off-by: James Houghton <jthoughton@google.com>
---
mm/hugetlb.c | 42 +++++++++++++++++++++++++++++++++---------
1 file changed, 33 insertions(+), 9 deletions(-)
Comments
On 01/01/23 23:00, James Houghton wrote:
> PMD sharing can only be done in PUD_SIZE-aligned pieces of VMAs;
> however, it is possible that HugeTLB VMAs are split without unsharing
> the PMDs first.
>
> In some (most?) cases, this is a non-issue, like userfaultfd_register
> and mprotect, where PMDs are unshared before anything is done. However,
> mbind() and madvise() (like MADV_DONTDUMP) can cause a split without
> unsharing first.

Thanks James. I am just trying to determine if we may have any issues/bugs/
undesired behavior based on this today. Consider the cases mentioned above:

mbind - I do not think this would cause any user visible issues. mbind is
        only dealing with newly allocated pages. We do not unshare as the
        result of a mbind call today.
madvise(MADV_DONTDUMP) - It looks like this results in a flag (VM_DONTDUMP)
        being set on the vma. So, I do not believe sharing page tables
        would cause any user visible issue.

One somewhat strange thing about two vmas sharing a PMD after a split is
that operations on one VMA can impact the other. For example, suppose a
VMA split via mbind happens. Then later, mprotect is done on one of
the VMAs in the range that is shared. That would result in the area being
unshared in both VMAs. So, the 'other' vma could see minor faults after
the mprotect.

Just curious if you (or anyone) knows of a user visible issue caused by this
today. Trying to determine if we need a Fixes: tag.

Code changes look fine to me.
> Thanks James. I am just trying to determine if we may have any issues/bugs/
> undesired behavior based on this today. Consider the cases mentioned above:
> mbind - I do not think this would cause any user visible issues. mbind is
>         only dealing with newly allocated pages. We do not unshare as the
>         result of a mbind call today.
> madvise(MADV_DONTDUMP) - It looks like this results in a flag (VM_DONTDUMP)
>         being set on the vma. So, I do not believe sharing page tables
>         would cause any user visible issue.
>
> One somewhat strange thing about two vmas sharing a PMD after a split is
> that operations on one VMA can impact the other. For example, suppose a
> VMA split via mbind happens. Then later, mprotect is done on one of
> the VMAs in the range that is shared. That would result in the area being
> unshared in both VMAs. So, the 'other' vma could see minor faults after
> the mprotect.
>
> Just curious if you (or anyone) knows of a user visible issue caused by this
> today. Trying to determine if we need a Fixes: tag.

I think I've come up with one... :) It only took many many hours of
staring at code to come up with:

1. Fault in PUD_SIZE-aligned hugetlb mapping
2. fork() (to actually share the PMDs)
3. Split VMA with MADV_DONTDUMP
4. Register the lower piece of the newly split VMA with
   UFFDIO_REGISTER_MODE_WRITEPROTECT (this will call
   hugetlb_unshare_all_pmds, but it will not attempt to unshare in the
   unaligned bits now)
5. Now calling UFFDIO_WRITEPROTECT will drop into
   hugetlb_change_protection and succeed in unsharing. That will hit the
   WARN_ON_ONCE and *not write-protect anything*.

I'll see if I can confirm that this is indeed possible and send a
repro if it is.

60dfaad65a ("mm/hugetlb: allow uffd wr-protect none ptes") is the
commit that introduced the WARN_ON_ONCE; perhaps it's a good choice
for a Fixes: tag (if above is indeed true).

> Code changes look fine to me.

Thanks Mike!

- James
> I think I've come up with one... :) It only took many many hours of
> staring at code to come up with:
>
> 1. Fault in PUD_SIZE-aligned hugetlb mapping
> 2. fork() (to actually share the PMDs)

Erm, I mean: mmap(), then fork(), then fault in both processes.
On 01/03/23 20:26, James Houghton wrote:
> > Thanks James. I am just trying to determine if we may have any issues/bugs/
> > undesired behavior based on this today. Consider the cases mentioned above:
> > mbind - I do not think this would cause any user visible issues. mbind is
> >         only dealing with newly allocated pages. We do not unshare as the
> >         result of a mbind call today.
> > madvise(MADV_DONTDUMP) - It looks like this results in a flag (VM_DONTDUMP)
> >         being set on the vma. So, I do not believe sharing page tables
> >         would cause any user visible issue.
> >
> > One somewhat strange thing about two vmas sharing a PMD after a split is
> > that operations on one VMA can impact the other. For example, suppose a
> > VMA split via mbind happens. Then later, mprotect is done on one of
> > the VMAs in the range that is shared. That would result in the area being
> > unshared in both VMAs. So, the 'other' vma could see minor faults after
> > the mprotect.
> >
> > Just curious if you (or anyone) knows of a user visible issue caused by this
> > today. Trying to determine if we need a Fixes: tag.
>
> I think I've come up with one... :) It only took many many hours of
> staring at code to come up with:
>
> 1. Fault in PUD_SIZE-aligned hugetlb mapping
> 2. fork() (to actually share the PMDs)
> Erm, I mean: mmap(), then fork(), then fault in both processes.
> 3. Split VMA with MADV_DONTDUMP
> 4. Register the lower piece of the newly split VMA with
> UFFDIO_REGISTER_MODE_WRITEPROTECT (this will call
> hugetlb_unshare_all_pmds, but it will not attempt to unshare in the
> unaligned bits now)
> 5. Now calling UFFDIO_WRITEPROTECT will drop into
> hugetlb_change_protection and succeed in unsharing. That will hit the
> WARN_ON_ONCE and *not write-protect anything*.
>
> I'll see if I can confirm that this is indeed possible and send a
> repro if it is.

I think your analysis above is correct. The key being the failure to unshare
in the non-PUD_SIZE vma after the split.

To me, the fact it was somewhat difficult to come up with this scenario is an
argument that we should just unshare at split time as you propose. Who
knows what other issues may exist.

> 60dfaad65a ("mm/hugetlb: allow uffd wr-protect none ptes") is the
> commit that introduced the WARN_ON_ONCE; perhaps it's a good choice
> for a Fixes: tag (if above is indeed true).

If the key issue in your above scenario is indeed the failure of
hugetlb_unshare_all_pmds in the non-PUD_SIZE vma, then perhaps we tag?

6dfeaff93be1 ("hugetlb/userfaultfd: unshare all pmds for hugetlbfs when
register wp")
On Sun, Jan 01, 2023 at 11:00:42PM +0000, James Houghton wrote:
> PMD sharing can only be done in PUD_SIZE-aligned pieces of VMAs;
> however, it is possible that HugeTLB VMAs are split without unsharing
> the PMDs first.
>
> In some (most?) cases, this is a non-issue, like userfaultfd_register
> and mprotect, where PMDs are unshared before anything is done. However,
> mbind() and madvise() (like MADV_DONTDUMP) can cause a split without
> unsharing first.
>
> It might seem ideal to unshare in hugetlb_vm_op_open, but that would
> only unshare PMDs in the new VMA.
>
> Signed-off-by: James Houghton <jthoughton@google.com>
> ---
>  mm/hugetlb.c | 42 +++++++++++++++++++++++++++++++++---------
>  1 file changed, 33 insertions(+), 9 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index b39b74e0591a..bf7a1f628357 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -94,6 +94,8 @@ static int hugetlb_acct_memory(struct hstate *h, long delta);
>  static void hugetlb_vma_lock_free(struct vm_area_struct *vma);
>  static void hugetlb_vma_lock_alloc(struct vm_area_struct *vma);
>  static void __hugetlb_vma_unlock_write_free(struct vm_area_struct *vma);
> +static void hugetlb_unshare_pmds(struct vm_area_struct *vma,
> +		unsigned long start, unsigned long end);
>
>  static inline bool subpool_is_free(struct hugepage_subpool *spool)
>  {
> @@ -4828,6 +4830,23 @@ static int hugetlb_vm_op_split(struct vm_area_struct *vma, unsigned long addr)
>  {
>  	if (addr & ~(huge_page_mask(hstate_vma(vma))))
>  		return -EINVAL;
> +
> +	/* We require PUD_SIZE VMA alignment for PMD sharing. */

I can get the point, but it reads slightly awkward. How about:

	/*
	 * If the address to split can be in the middle of a shared pmd
	 * range, unshare before split the vma.
	 */

I remember you had a helper to check pmd sharing possibility. Can use here
depending on whether that existed in the code base or in your hgm series
(or just pick that up with this one?).

> +	if (addr & ~PUD_MASK) {
> +		/*
> +		 * hugetlb_vm_op_split is called right before we attempt to
> +		 * split the VMA. We will need to unshare PMDs in the old and
> +		 * new VMAs, so let's unshare before we split.
> +		 */
> +		unsigned long floor = addr & PUD_MASK;
> +		unsigned long ceil = floor + PUD_SIZE;
> +
> +		if (floor < vma->vm_start || ceil >= vma->vm_end)

s/>=/>/?

> +			/* PMD sharing is already impossible. */
> +			return 0;

IMHO slightly cleaner to write in the reversed way and let it fall through:

	if (floor >= vma->vm_start && ceil <= vma->vm_end)
		hugetlb_unshare_pmds(vma, floor, ceil);

Thanks,

> +		hugetlb_unshare_pmds(vma, floor, ceil);
> +	}
> +
>  	return 0;
>  }
>
> @@ -7313,26 +7332,21 @@ void move_hugetlb_state(struct folio *old_folio, struct folio *new_folio, int re
>  	}
>  }
>
> -/*
> - * This function will unconditionally remove all the shared pmd pgtable entries
> - * within the specific vma for a hugetlbfs memory range.
> - */
> -void hugetlb_unshare_all_pmds(struct vm_area_struct *vma)
> +static void hugetlb_unshare_pmds(struct vm_area_struct *vma,
> +					unsigned long start,
> +					unsigned long end)
>  {
>  	struct hstate *h = hstate_vma(vma);
>  	unsigned long sz = huge_page_size(h);
>  	struct mm_struct *mm = vma->vm_mm;
>  	struct mmu_notifier_range range;
> -	unsigned long address, start, end;
> +	unsigned long address;
>  	spinlock_t *ptl;
>  	pte_t *ptep;
>
>  	if (!(vma->vm_flags & VM_MAYSHARE))
>  		return;
>
> -	start = ALIGN(vma->vm_start, PUD_SIZE);
> -	end = ALIGN_DOWN(vma->vm_end, PUD_SIZE);
> -
>  	if (start >= end)
>  		return;
>
> @@ -7364,6 +7378,16 @@ void hugetlb_unshare_all_pmds(struct vm_area_struct *vma)
>  	mmu_notifier_invalidate_range_end(&range);
>  }
>
> +/*
> + * This function will unconditionally remove all the shared pmd pgtable entries
> + * within the specific vma for a hugetlbfs memory range.
> + */
> +void hugetlb_unshare_all_pmds(struct vm_area_struct *vma)
> +{
> +	hugetlb_unshare_pmds(vma, ALIGN(vma->vm_start, PUD_SIZE),
> +			ALIGN_DOWN(vma->vm_end, PUD_SIZE));
> +}
> +
>  #ifdef CONFIG_CMA
>  static bool cma_reserve_called __initdata;
>
> --
> 2.39.0.314.g84b9a713c41-goog
>
> > I'll see if I can confirm that this is indeed possible and send a
> > repro if it is.
>
> I think your analysis above is correct. The key being the failure to unshare
> in the non-PUD_SIZE vma after the split.

I do indeed hit the WARN_ON_ONCE (repro attached), and the MADV wasn't
even needed (the UFFDIO_REGISTER does the VMA split before "unsharing
all PMDs"). With the fix, we avoid the WARN_ON_ONCE, but the behavior
is still incorrect: I expect the address range to be write-protected,
but it isn't.

The reason why is that hugetlb_change_protection uses huge_pte_offset,
even if it's being called for a UFFDIO_WRITEPROTECT with
UFFDIO_WRITEPROTECT_MODE_WP. In that particular case, I'm pretty sure
we should be using huge_pte_alloc, but even so, it's not trivial to
get an allocation failure back up to userspace. The non-hugetlb
implementation of UFFDIO_WRITEPROTECT seems to also have this problem.

Peter, what do you think?

> To me, the fact it was somewhat difficult to come up with this scenario is an
> argument that we should just unshare at split time as you propose. Who
> knows what other issues may exist.
>
> > 60dfaad65a ("mm/hugetlb: allow uffd wr-protect none ptes") is the
> > commit that introduced the WARN_ON_ONCE; perhaps it's a good choice
> > for a Fixes: tag (if above is indeed true).
>
> If the key issue in your above scenario is indeed the failure of
> hugetlb_unshare_all_pmds in the non-PUD_SIZE vma, then perhaps we tag?
>
> 6dfeaff93be1 ("hugetlb/userfaultfd: unshare all pmds for hugetlbfs when
> register wp")

SGTM. Thanks Mike.
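[For readers following the thread: the repro James attached is not reproduced in this archive. A minimal userspace sketch of the steps he describes might look roughly like the following. This is illustrative only; the PUD_SIZE/hugepage constants are x86-64 assumptions, the sleep-based synchronization is crude, and it needs enough reserved 2 MiB hugepages plus a kernel with userfaultfd write-protect support for hugetlbfs and permission to call userfaultfd().]

#define _GNU_SOURCE
#include <err.h>
#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <signal.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>

#define PUD_SZ   (1UL << 30)	/* assumed PUD_SIZE on x86-64 */
#define HPAGE_SZ (2UL << 20)	/* assumed 2 MiB hugepages */

int main(void)
{
	/* Reserve 2 * PUD_SIZE of address space so we can pick a
	 * PUD_SIZE-aligned slot; PMD sharing needs that alignment. */
	char *hint = mmap(NULL, 2 * PUD_SZ, PROT_NONE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (hint == MAP_FAILED)
		err(1, "mmap hint");
	char *aligned = (char *)(((unsigned long)hint + PUD_SZ - 1) & ~(PUD_SZ - 1));

	/* 1. PUD_SIZE-aligned shared hugetlb mapping (PMD-sharing candidate). */
	char *map = mmap(aligned, PUD_SZ, PROT_READ | PROT_WRITE,
			 MAP_SHARED | MAP_ANONYMOUS | MAP_HUGETLB | MAP_FIXED, -1, 0);
	if (map == MAP_FAILED)
		err(1, "mmap hugetlb");

	/* 2. fork(), then fault in both processes so the PMDs get shared. */
	pid_t child = fork();
	for (unsigned long off = 0; off < PUD_SZ; off += HPAGE_SZ)
		map[off] = 1;
	if (child == 0) {
		pause();	/* keep the child's shared page tables alive */
		_exit(0);
	}
	sleep(1);		/* crude: give the child time to finish faulting */

	/* 3./4. Register a piece that stops short of the PUD boundary. This
	 * splits the VMA; hugetlb_unshare_all_pmds() then sees a non-PUD_SIZE
	 * VMA and does not unshare anything in it. */
	int uffd = syscall(__NR_userfaultfd, O_CLOEXEC);
	if (uffd < 0)
		err(1, "userfaultfd");
	struct uffdio_api api = { .api = UFFD_API };
	if (ioctl(uffd, UFFDIO_API, &api))
		err(1, "UFFDIO_API");
	struct uffdio_register reg = {
		.range = { .start = (unsigned long)map, .len = PUD_SZ - HPAGE_SZ },
		.mode = UFFDIO_REGISTER_MODE_WP,
	};
	if (ioctl(uffd, UFFDIO_REGISTER, &reg))
		err(1, "UFFDIO_REGISTER");

	/* 5. Write-protect the range: hugetlb_change_protection() now finds
	 * still-shared PMDs, unshares them, hits the WARN_ON_ONCE, and skips
	 * write-protecting those entries. */
	struct uffdio_writeprotect wp = {
		.range = reg.range,
		.mode = UFFDIO_WRITEPROTECT_MODE_WP,
	};
	if (ioctl(uffd, UFFDIO_WRITEPROTECT, &wp))
		err(1, "UFFDIO_WRITEPROTECT");

	kill(child, SIGKILL);
	return 0;
}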
> > @@ -4828,6 +4830,23 @@ static int hugetlb_vm_op_split(struct vm_area_struct *vma, unsigned long addr)
> >  {
> >  	if (addr & ~(huge_page_mask(hstate_vma(vma))))
> >  		return -EINVAL;
> > +
> > +	/* We require PUD_SIZE VMA alignment for PMD sharing. */
>
> I can get the point, but it reads slightly awkward. How about:
>
> 	/*
> 	 * If the address to split can be in the middle of a shared pmd
> 	 * range, unshare before split the vma.
> 	 */
>

How about:

	/*
	 * PMD sharing is only possible for PUD_SIZE-aligned address ranges
	 * in HugeTLB VMAs. If we will lose PUD_SIZE alignment due to this split,
	 * unshare PMDs in the PUD_SIZE interval surrounding addr now.
	 */

> I remember you had a helper to check pmd sharing possibility. Can use here
> depending on whether that existed in the code base or in your hgm series
> (or just pick that up with this one?).

Right, it introduces `pmd_sharing_possible` but I don't think it helps here.

>
> > +	if (addr & ~PUD_MASK) {
> > +		/*
> > +		 * hugetlb_vm_op_split is called right before we attempt to
> > +		 * split the VMA. We will need to unshare PMDs in the old and
> > +		 * new VMAs, so let's unshare before we split.
> > +		 */
> > +		unsigned long floor = addr & PUD_MASK;
> > +		unsigned long ceil = floor + PUD_SIZE;
> > +
> > +		if (floor < vma->vm_start || ceil >= vma->vm_end)
>
> s/>=/>/?

Indeed, thanks.

>
> > +			/* PMD sharing is already impossible. */
> > +			return 0;
>
> IMHO slightly cleaner to write in the reversed way and let it fall through:
>
> 	if (floor >= vma->vm_start && ceil <= vma->vm_end)
> 		hugetlb_unshare_pmds(vma, floor, ceil);

Will do.

> Thanks,

Thanks Peter!
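[Folding the review points above together, the next version of this hunk would presumably end up looking something like the sketch below. The v2 patch is not part of this page, so treat this as illustrative rather than the final committed code.]

static int hugetlb_vm_op_split(struct vm_area_struct *vma, unsigned long addr)
{
	if (addr & ~(huge_page_mask(hstate_vma(vma))))
		return -EINVAL;

	/*
	 * PMD sharing is only possible for PUD_SIZE-aligned address ranges
	 * in HugeTLB VMAs. If we will lose PUD_SIZE alignment due to this
	 * split, unshare PMDs in the PUD_SIZE interval surrounding addr now.
	 */
	if (addr & ~PUD_MASK) {
		/*
		 * hugetlb_vm_op_split is called right before we attempt to
		 * split the VMA. We will need to unshare PMDs in the old and
		 * new VMAs, so let's unshare before we split.
		 */
		unsigned long floor = addr & PUD_MASK;
		unsigned long ceil = floor + PUD_SIZE;

		/* Only unshare if PMD sharing was possible here at all
		 * (reversed check falls through, per Peter's comment). */
		if (floor >= vma->vm_start && ceil <= vma->vm_end)
			hugetlb_unshare_pmds(vma, floor, ceil);
	}

	return 0;
}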
On Wed, Jan 04, 2023 at 07:10:11PM +0000, James Houghton wrote:
> > > I'll see if I can confirm that this is indeed possible and send a
> > > repro if it is.
> >
> > I think your analysis above is correct. The key being the failure to unshare
> > in the non-PUD_SIZE vma after the split.
>
> I do indeed hit the WARN_ON_ONCE (repro attached), and the MADV wasn't
> even needed (the UFFDIO_REGISTER does the VMA split before "unsharing
> all PMDs"). With the fix, we avoid the WARN_ON_ONCE, but the behavior
> is still incorrect: I expect the address range to be write-protected,
> but it isn't.
>
> The reason why is that hugetlb_change_protection uses huge_pte_offset,
> even if it's being called for a UFFDIO_WRITEPROTECT with
> UFFDIO_WRITEPROTECT_MODE_WP. In that particular case, I'm pretty sure
> we should be using huge_pte_alloc, but even so, it's not trivial to
> get an allocation failure back up to userspace. The non-hugetlb
> implementation of UFFDIO_WRITEPROTECT seems to also have this problem.
>
> Peter, what do you think?

Indeed. Thanks for spotting that, James.

Non-hugetlb should be fine with having empty pgtable entries. Anon doesn't
need to care about no-pgtable-populated ranges so far. Shmem does it with a
few change_prepare() calls to populate the entries so the markers can be
installed later on.

However I think the fault handling is still not well handled as you pointed
out even for shmem: that's the path I probably never triggered myself yet
before and the code stayed there since a very early version:

#define change_pmd_prepare(vma, pmd, cp_flags)				\
	do {								\
		if (unlikely(uffd_wp_protect_file(vma, cp_flags))) {	\
			if (WARN_ON_ONCE(pte_alloc(vma->vm_mm, pmd)))	\
				break;					\
		}							\
	} while (0)

I think a better thing we can do here (instead of warning and stopping the
UFFDIO_WRITEPROTECT at the current stage) is returning with -ENOMEM
properly so the user can know the error. We'll need to touch the stacks up
to uffd_wp_range() as it's the only one that can trigger the -ENOMEM so
far, so as to not ignore retval from change_protection().

Meanwhile, I'd also wonder whether we should call pagefault_out_of_memory()
because it should be the same as when pgtable allocation failure happens in
page faults, we may want to OOM already. I can take care of hugetlb part
too along the way.

Man page of UFFDIO_WRITEPROTECT may need a fixup too to introduce -ENOMEM.

I can quickly prepare some patches for this, and hopefully it doesn't need
to block the current fix on split.

Any thoughts?

> >
> > To me, the fact it was somewhat difficult to come up with this scenario is an
> > argument that we should just unshare at split time as you propose. Who
> > knows what other issues may exist.
> >
> > > 60dfaad65a ("mm/hugetlb: allow uffd wr-protect none ptes") is the
> > > commit that introduced the WARN_ON_ONCE; perhaps it's a good choice
> > > for a Fixes: tag (if above is indeed true).
> >
> > If the key issue in your above scenario is indeed the failure of
> > hugetlb_unshare_all_pmds in the non-PUD_SIZE vma, then perhaps we tag?
> >
> > 6dfeaff93be1 ("hugetlb/userfaultfd: unshare all pmds for hugetlbfs when
> > register wp")
>
> SGTM. Thanks Mike.

Looks good here too. Thanks,
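[As an illustration of the direction Peter sketches here, the generic helper could plausibly be reworked along the following lines so that the allocation failure becomes visible instead of only warning. This is a guess at the shape of the change, not his actual series.]

/*
 * Hypothetical sketch: have change_pmd_prepare() report pgtable allocation
 * failure so change_protection() callers (and ultimately the
 * UFFDIO_WRITEPROTECT ioctl via uffd_wp_range()) can return -ENOMEM to
 * userspace instead of warning and silently stopping.
 */
#define change_pmd_prepare(vma, pmd, cp_flags)				\
	({								\
		long err_ = 0;						\
		if (unlikely(uffd_wp_protect_file(vma, cp_flags))) {	\
			if (pte_alloc(vma->vm_mm, pmd))			\
				err_ = -ENOMEM;				\
		}							\
		err_;							\
	})

[Callers in change_pmd_range() would then have to check the returned value and propagate it upward, which is the "touch the stacks up to uffd_wp_range()" part Peter mentions.]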
On Wed, Jan 04, 2023 at 07:34:00PM +0000, James Houghton wrote:
> How about:
>
> 	/*
> 	 * PMD sharing is only possible for PUD_SIZE-aligned address ranges
> 	 * in HugeTLB VMAs. If we will lose PUD_SIZE alignment due to this split,
> 	 * unshare PMDs in the PUD_SIZE interval surrounding addr now.
> 	 */

Even better, thanks.
On Wed, Jan 4, 2023 at 8:03 PM Peter Xu <peterx@redhat.com> wrote:
>
> On Wed, Jan 04, 2023 at 07:10:11PM +0000, James Houghton wrote:
> > > > I'll see if I can confirm that this is indeed possible and send a
> > > > repro if it is.
> > >
> > > I think your analysis above is correct. The key being the failure to unshare
> > > in the non-PUD_SIZE vma after the split.
> >
> > I do indeed hit the WARN_ON_ONCE (repro attached), and the MADV wasn't
> > even needed (the UFFDIO_REGISTER does the VMA split before "unsharing
> > all PMDs"). With the fix, we avoid the WARN_ON_ONCE, but the behavior
> > is still incorrect: I expect the address range to be write-protected,
> > but it isn't.
> >
> > The reason why is that hugetlb_change_protection uses huge_pte_offset,
> > even if it's being called for a UFFDIO_WRITEPROTECT with
> > UFFDIO_WRITEPROTECT_MODE_WP. In that particular case, I'm pretty sure
> > we should be using huge_pte_alloc, but even so, it's not trivial to
> > get an allocation failure back up to userspace. The non-hugetlb
> > implementation of UFFDIO_WRITEPROTECT seems to also have this problem.
> >
> > Peter, what do you think?
>
> Indeed. Thanks for spotting that, James.
>
> Non-hugetlb should be fine with having empty pgtable entries. Anon doesn't
> need to care about no-pgtable-populated ranges so far. Shmem does it with a
> few change_prepare() calls to populate the entries so the markers can be
> installed later on.

Ah ok! :)

>
> However I think the fault handling is still not well handled as you pointed
> out even for shmem: that's the path I probably never triggered myself yet
> before and the code stayed there since a very early version:
>
> #define change_pmd_prepare(vma, pmd, cp_flags)				\
> 	do {								\
> 		if (unlikely(uffd_wp_protect_file(vma, cp_flags))) {	\
> 			if (WARN_ON_ONCE(pte_alloc(vma->vm_mm, pmd)))	\
> 				break;					\
> 		}							\
> 	} while (0)
>
> I think a better thing we can do here (instead of warning and stopping the
> UFFDIO_WRITEPROTECT at the current stage) is returning with -ENOMEM
> properly so the user can know the error. We'll need to touch the stacks up
> to uffd_wp_range() as it's the only one that can trigger the -ENOMEM so
> far, so as to not ignore retval from change_protection().
>
> Meanwhile, I'd also wonder whether we should call pagefault_out_of_memory()
> because it should be the same as when pgtable allocation failure happens in
> page faults, we may want to OOM already. I can take care of hugetlb part
> too along the way.

I might be misunderstanding, but the only case where
hugetlb_change_protection() would *need* to allocate is when it is
called from UFFDIO_WRITEPROTECT, not while handling a #pf. So I don't
think any calls to pagefault_out_of_memory() need to be added.

>
> Man page of UFFDIO_WRITEPROTECT may need a fixup too to introduce -ENOMEM.
>
> I can quickly prepare some patches for this, and hopefully it doesn't need
> to block the current fix on split.

I don't think it should block this splitting fix. I'll send another
version of this fix soon.

>
> Any thoughts?
>
> > >
> > > To me, the fact it was somewhat difficult to come up with this scenario is an
> > > argument that we should just unshare at split time as you propose. Who
> > > knows what other issues may exist.
> > >
> > > > 60dfaad65a ("mm/hugetlb: allow uffd wr-protect none ptes") is the
> > > > commit that introduced the WARN_ON_ONCE; perhaps it's a good choice
> > > > for a Fixes: tag (if above is indeed true).
> > >
> > > If the key issue in your above scenario is indeed the failure of
> > > hugetlb_unshare_all_pmds in the non-PUD_SIZE vma, then perhaps we tag?
> > >
> > > 6dfeaff93be1 ("hugetlb/userfaultfd: unshare all pmds for hugetlbfs when
> > > register wp")
> >
> > SGTM. Thanks Mike.
>
> Looks good here too. Thanks,

Thanks, Peter!
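[For the hugetlb side of the -ENOMEM discussion above, the lookup inside hugetlb_change_protection()'s per-hugepage loop could plausibly become allocation-aware along these lines. This is a hypothetical fragment illustrating James's huge_pte_alloc suggestion, not the fix that was actually posted; the error-return path is an assumption.]

		/*
		 * Hypothetical fragment: when called for userfaultfd
		 * write-protect, empty page table slots still need a PTE
		 * marker installed, so allocate instead of merely looking
		 * up. The function would also need a way to hand the
		 * allocation failure (-ENOMEM) back up to the
		 * UFFDIO_WRITEPROTECT ioctl.
		 */
		if (uffd_wp) {
			ptep = huge_pte_alloc(mm, vma, address, psize);
			if (!ptep)
				return -ENOMEM;	/* assumes the return type can carry errors */
		} else {
			ptep = huge_pte_offset(mm, address, psize);
			if (!ptep)
				continue;
		}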
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index b39b74e0591a..bf7a1f628357 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -94,6 +94,8 @@ static int hugetlb_acct_memory(struct hstate *h, long delta);
 static void hugetlb_vma_lock_free(struct vm_area_struct *vma);
 static void hugetlb_vma_lock_alloc(struct vm_area_struct *vma);
 static void __hugetlb_vma_unlock_write_free(struct vm_area_struct *vma);
+static void hugetlb_unshare_pmds(struct vm_area_struct *vma,
+		unsigned long start, unsigned long end);
 
 static inline bool subpool_is_free(struct hugepage_subpool *spool)
 {
@@ -4828,6 +4830,23 @@ static int hugetlb_vm_op_split(struct vm_area_struct *vma, unsigned long addr)
 {
 	if (addr & ~(huge_page_mask(hstate_vma(vma))))
 		return -EINVAL;
+
+	/* We require PUD_SIZE VMA alignment for PMD sharing. */
+	if (addr & ~PUD_MASK) {
+		/*
+		 * hugetlb_vm_op_split is called right before we attempt to
+		 * split the VMA. We will need to unshare PMDs in the old and
+		 * new VMAs, so let's unshare before we split.
+		 */
+		unsigned long floor = addr & PUD_MASK;
+		unsigned long ceil = floor + PUD_SIZE;
+
+		if (floor < vma->vm_start || ceil >= vma->vm_end)
+			/* PMD sharing is already impossible. */
+			return 0;
+		hugetlb_unshare_pmds(vma, floor, ceil);
+	}
+
 	return 0;
 }
 
@@ -7313,26 +7332,21 @@ void move_hugetlb_state(struct folio *old_folio, struct folio *new_folio, int re
 	}
 }
 
-/*
- * This function will unconditionally remove all the shared pmd pgtable entries
- * within the specific vma for a hugetlbfs memory range.
- */
-void hugetlb_unshare_all_pmds(struct vm_area_struct *vma)
+static void hugetlb_unshare_pmds(struct vm_area_struct *vma,
+					unsigned long start,
+					unsigned long end)
 {
 	struct hstate *h = hstate_vma(vma);
 	unsigned long sz = huge_page_size(h);
 	struct mm_struct *mm = vma->vm_mm;
 	struct mmu_notifier_range range;
-	unsigned long address, start, end;
+	unsigned long address;
 	spinlock_t *ptl;
 	pte_t *ptep;
 
 	if (!(vma->vm_flags & VM_MAYSHARE))
 		return;
 
-	start = ALIGN(vma->vm_start, PUD_SIZE);
-	end = ALIGN_DOWN(vma->vm_end, PUD_SIZE);
-
 	if (start >= end)
 		return;
 
@@ -7364,6 +7378,16 @@ void hugetlb_unshare_all_pmds(struct vm_area_struct *vma)
 	mmu_notifier_invalidate_range_end(&range);
 }
 
+/*
+ * This function will unconditionally remove all the shared pmd pgtable entries
+ * within the specific vma for a hugetlbfs memory range.
+ */
+void hugetlb_unshare_all_pmds(struct vm_area_struct *vma)
+{
+	hugetlb_unshare_pmds(vma, ALIGN(vma->vm_start, PUD_SIZE),
+			ALIGN_DOWN(vma->vm_end, PUD_SIZE));
+}
+
 #ifdef CONFIG_CMA
 static bool cma_reserve_called __initdata;