Message ID | 20230619231044.112894-4-peterx@redhat.com |
---|---|
State | New |
Headers |
From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Andrea Arcangeli <aarcange@redhat.com>, Mike Rapoport <rppt@kernel.org>, David Hildenbrand <david@redhat.com>, Matthew Wilcox <willy@infradead.org>, Vlastimil Babka <vbabka@suse.cz>, John Hubbard <jhubbard@nvidia.com>, "Kirill A . Shutemov" <kirill@shutemov.name>, James Houghton <jthoughton@google.com>, Andrew Morton <akpm@linux-foundation.org>, Lorenzo Stoakes <lstoakes@gmail.com>, Hugh Dickins <hughd@google.com>, Mike Kravetz <mike.kravetz@oracle.com>, peterx@redhat.com, Jason Gunthorpe <jgg@nvidia.com>
Subject: [PATCH v2 3/8] mm/hugetlb: Add page_mask for hugetlb_follow_page_mask()
Date: Mon, 19 Jun 2023 19:10:39 -0400
Message-Id: <20230619231044.112894-4-peterx@redhat.com>
In-Reply-To: <20230619231044.112894-1-peterx@redhat.com>
References: <20230619231044.112894-1-peterx@redhat.com> |
Series | mm/gup: Unify hugetlb, speed up thp |
Commit Message
Peter Xu
June 19, 2023, 11:10 p.m. UTC
follow_page() doesn't need it, but we'll start to need it when unifying gup
for hugetlb.
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 include/linux/hugetlb.h | 8 +++++---
 mm/gup.c                | 3 ++-
 mm/hugetlb.c            | 5 ++++-
 3 files changed, 11 insertions(+), 5 deletions(-)
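For orientation: the page_mask reported back to GUP encodes "base pages spanned by the mapping, minus one" (0 for a normal PTE-mapped page, 511 for a 2M hugetlb page over 4K base pages), which is what lets the caller service several consecutive base pages from a single page-table walk. Below is a minimal user-space sketch of that batching arithmetic; batch_pages() and the PAGE_SHIFT value are illustrative assumptions rather than kernel code, though the formula mirrors the page_increm computation in GUP's caller loop.

#include <stdio.h>

#define PAGE_SHIFT	12	/* assume 4K base pages */

/*
 * Illustrative only: given the page_mask reported for the mapping that
 * covers 'start', return how many consecutive base pages can be filled
 * without walking the page tables again.
 */
static unsigned long batch_pages(unsigned long start, unsigned int page_mask,
				 unsigned long nr_pages_left)
{
	unsigned long page_increm = 1 + (~(start >> PAGE_SHIFT) & page_mask);

	return page_increm < nr_pages_left ? page_increm : nr_pages_left;
}

int main(void)
{
	/* 2M hugetlb page built from 4K base pages: page_mask == 511 */
	unsigned int page_mask = 511;
	/* address 100 base pages into the huge page */
	unsigned long start = 100UL << PAGE_SHIFT;

	/* prints 412: the remaining base pages of this huge page */
	printf("%lu\n", batch_pages(start, page_mask, 1024));
	return 0;
}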
Comments
On 20.06.23 01:10, Peter Xu wrote:
> follow_page() doesn't need it, but we'll start to need it when unifying gup
> for hugetlb.
>
> Signed-off-by: Peter Xu <peterx@redhat.com>

[...]

> @@ -6499,6 +6500,8 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
>  			page = NULL;
>  			goto out;
>  		}
> +
> +		*page_mask = ~huge_page_mask(h) >> PAGE_SHIFT;

As discussed, can be simplified. But can be done on top (or not at all, but
it is confusing code).

Reviewed-by: David Hildenbrand <david@redhat.com>
On Tue, Jun 20, 2023 at 05:23:09PM +0200, David Hildenbrand wrote:
> On 20.06.23 01:10, Peter Xu wrote:
> > follow_page() doesn't need it, but we'll start to need it when unifying gup
> > for hugetlb.
> >
> > Signed-off-by: Peter Xu <peterx@redhat.com>

[...]

> > +
> > +		*page_mask = ~huge_page_mask(h) >> PAGE_SHIFT;
>
> As discussed, can be simplified. But can be done on top (or not at all, but
> it is confusing code).

Since we decided to make this prettier.. At last I decided to go with this:

	*page_mask = (1U << huge_page_order(h)) - 1;

The previous suggestion of PHYS_PFN() will do two shifts over PAGE_SIZE
(the other one in huge_page_size()) which might be unnecessary, also, PHYS_
can be slightly misleading too as prefix.

> Reviewed-by: David Hildenbrand <david@redhat.com>

I'll take this with above change, please shoot if not applicable. Thanks,
On 20.06.23 18:28, Peter Xu wrote:
> On Tue, Jun 20, 2023 at 05:23:09PM +0200, David Hildenbrand wrote:
>> On 20.06.23 01:10, Peter Xu wrote:
>>> follow_page() doesn't need it, but we'll start to need it when unifying gup
>>> for hugetlb.
>>>
>>> Signed-off-by: Peter Xu <peterx@redhat.com>

[...]

>>> +
>>> +		*page_mask = ~huge_page_mask(h) >> PAGE_SHIFT;
>>
>> As discussed, can be simplified. But can be done on top (or not at all, but
>> it is confusing code).
>
> Since we decided to make this prettier.. At last I decided to go with this:
>
> 	*page_mask = (1U << huge_page_order(h)) - 1;
>
> The previous suggestion of PHYS_PFN() will do two shifts over PAGE_SIZE
> (the other one in huge_page_size()) which might be unnecessary, also, PHYS_
> can be slightly misleading too as prefix.
>
>> Reviewed-by: David Hildenbrand <david@redhat.com>
>
> I'll take this with above change, please shoot if not applicable. Thanks,
>

Perfectly fine :)
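For what it's worth, the two formulations discussed above compute the same value: huge_page_mask(h) is the complement of (huge_page_size(h) - 1), so ~huge_page_mask(h) >> PAGE_SHIFT and (1U << huge_page_order(h)) - 1 both reduce to "base pages per huge page, minus one". A quick stand-alone check follows; the HUGE_ORDER/HUGE_SIZE constants and huge_page_mask_stub() are user-space stand-ins assuming a 2M huge page over 4K base pages, not the kernel's hstate helpers.

#include <assert.h>
#include <stdio.h>

#define PAGE_SHIFT	12					/* assumed 4K base pages */
#define HUGE_ORDER	9					/* stand-in for huge_page_order(h) */
#define HUGE_SIZE	(1UL << (PAGE_SHIFT + HUGE_ORDER))	/* stand-in for huge_page_size(h): 2M */

/* stand-in for huge_page_mask(h): masks an address down to the huge page boundary */
static unsigned long huge_page_mask_stub(void)
{
	return ~(HUGE_SIZE - 1);
}

int main(void)
{
	unsigned int old_expr = ~huge_page_mask_stub() >> PAGE_SHIFT;	/* original patch */
	unsigned int new_expr = (1U << HUGE_ORDER) - 1;			/* simplified form */

	assert(old_expr == new_expr);
	printf("page_mask = %u\n", new_expr);	/* 511 */
	return 0;
}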
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index beb7c63d2871..2e2d89e79d6c 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -131,7 +131,8 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
 int copy_hugetlb_page_range(struct mm_struct *, struct mm_struct *,
 			    struct vm_area_struct *, struct vm_area_struct *);
 struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
-				      unsigned long address, unsigned int flags);
+				      unsigned long address, unsigned int flags,
+				      unsigned int *page_mask);
 long follow_hugetlb_page(struct mm_struct *, struct vm_area_struct *,
 			 struct page **, unsigned long *, unsigned long *,
 			 long, unsigned int, int *);
@@ -297,8 +298,9 @@ static inline void adjust_range_if_pmd_sharing_possible(
 {
 }
 
-static inline struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
-				unsigned long address, unsigned int flags)
+static inline struct page *hugetlb_follow_page_mask(
+	struct vm_area_struct *vma, unsigned long address, unsigned int flags,
+	unsigned int *page_mask)
 {
 	BUILD_BUG(); /* should never be compiled in if !CONFIG_HUGETLB_PAGE*/
 }
diff --git a/mm/gup.c b/mm/gup.c
index abcd841d94b7..9fc9271cba8d 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -780,7 +780,8 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
 	 * Ordinary GUP uses follow_hugetlb_page for hugetlb processing.
 	 */
 	if (is_vm_hugetlb_page(vma))
-		return hugetlb_follow_page_mask(vma, address, flags);
+		return hugetlb_follow_page_mask(vma, address, flags,
+						&ctx->page_mask);
 
 	pgd = pgd_offset(mm, address);
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 9a6918c4250a..fbf6a09c0ec4 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6454,7 +6454,8 @@ static inline bool __follow_hugetlb_must_fault(struct vm_area_struct *vma,
 }
 
 struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
-				      unsigned long address, unsigned int flags)
+				      unsigned long address, unsigned int flags,
+				      unsigned int *page_mask)
 {
 	struct hstate *h = hstate_vma(vma);
 	struct mm_struct *mm = vma->vm_mm;
@@ -6499,6 +6500,8 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
 			page = NULL;
 			goto out;
 		}
+
+		*page_mask = ~huge_page_mask(h) >> PAGE_SHIFT;
 	}
 out:
 	spin_unlock(ptl);