Message ID | 20230613215346.1022773-7-peterx@redhat.com |
---|---|
State | New |
Headers |
From: Peter Xu <peterx@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Matthew Wilcox <willy@infradead.org>, Andrea Arcangeli <aarcange@redhat.com>,
    John Hubbard <jhubbard@nvidia.com>, Mike Rapoport <rppt@kernel.org>,
    David Hildenbrand <david@redhat.com>, Vlastimil Babka <vbabka@suse.cz>,
    peterx@redhat.com, "Kirill A . Shutemov" <kirill@shutemov.name>,
    Andrew Morton <akpm@linux-foundation.org>, Mike Kravetz <mike.kravetz@oracle.com>,
    James Houghton <jthoughton@google.com>, Hugh Dickins <hughd@google.com>
Subject: [PATCH 6/7] mm/gup: Accelerate thp gup even for "pages != NULL"
Date: Tue, 13 Jun 2023 17:53:45 -0400
Message-Id: <20230613215346.1022773-7-peterx@redhat.com>
In-Reply-To: <20230613215346.1022773-1-peterx@redhat.com>
References: <20230613215346.1022773-1-peterx@redhat.com>
Series | mm/gup: Unify hugetlb, speed up thp |
Commit Message
Peter Xu
June 13, 2023, 9:53 p.m. UTC
The acceleration of THP gup is done with ctx.page_mask; however, it is
ignored if **pages is non-NULL.

The old optimization was introduced in 2013 in 240aadeedc4a ("mm:
accelerate mm_populate() treatment of THP pages"). It didn't explain why
the **pages non-NULL case can't be optimized too. It's possible that at
the time the major goal was mm_populate(), for which this was enough.

Optimize thp for all cases by properly looping over each subpage, doing
cache flushes, and boosting refcounts / pincounts where needed in one go.
This can be verified using gup_test below:
# chrt -f 1 ./gup_test -m 512 -t -L -n 1024 -r 10
Before: 13992.50 ( +-8.75%)
After: 378.50 (+-69.62%)
Signed-off-by: Peter Xu <peterx@redhat.com>
---
mm/gup.c | 36 +++++++++++++++++++++++++++++-------
1 file changed, 29 insertions(+), 7 deletions(-)
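For a concrete sense of the batching: for a PMD-mapped THP, ctx.page_mask is
set to HPAGE_PMD_NR - 1 (511 with 4 KiB base pages), so the page_increm
formula used in the patch covers every remaining subpage of the folio in a
single iteration. Below is a minimal standalone sketch of that arithmetic --
illustration only, not kernel code; the constants and the example address are
assumed:

```c
#include <stdio.h>

#define PAGE_SHIFT   12          /* assumed: 4 KiB base pages */
#define HPAGE_PMD_NR 512         /* assumed: 2 MiB PMD-mapped THP */

int main(void)
{
    /* ctx.page_mask as set for a PMD THP: number of subpages minus one */
    unsigned long page_mask = HPAGE_PMD_NR - 1;

    /* a start address landing on subpage 5, i.e. not thp-size aligned */
    unsigned long start = 0x200000UL + 5 * (1UL << PAGE_SHIFT);

    /* same formula as in __get_user_pages(): subpages left in the folio */
    unsigned long page_increm = 1 + (~(start >> PAGE_SHIFT) & page_mask);

    /* prints 507: one iteration now fills pages[] for subpages 5..511 */
    printf("page_increm = %lu\n", page_increm);
    return 0;
}
```

The real code additionally clamps page_increm to nr_pages; with the old code
the same range was filled one base page per iteration whenever pages was
non-NULL, which is where the gup_test numbers above come from.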
Comments
On Tue, Jun 13, 2023 at 05:53:45PM -0400, Peter Xu wrote:
> +            if (page_increm > 1)
> +                WARN_ON_ONCE(
> +                    try_grab_folio(compound_head(page),

You don't need to call compound_head() here; try_grab_folio() works
on tail pages just fine.

> +                        page_increm - 1,
> +                        foll_flags) == NULL);
> +
> +            for (j = 0; j < page_increm; j++) {
> +                subpage = nth_page(page, j);
> +                pages[i+j] = subpage;
> +                flush_anon_page(vma, subpage, start + j * PAGE_SIZE);
> +                flush_dcache_page(subpage);

You're better off calling flush_dcache_folio() right at the end.
On Wed, Jun 14, 2023 at 03:58:34PM +0100, Matthew Wilcox wrote:
> On Tue, Jun 13, 2023 at 05:53:45PM -0400, Peter Xu wrote:
> > +            if (page_increm > 1)
> > +                WARN_ON_ONCE(
> > +                    try_grab_folio(compound_head(page),
>
> You don't need to call compound_head() here; try_grab_folio() works
> on tail pages just fine.

I did it with caution because of two things I'm not sure about:
is_pci_p2pdma_page() and is_longterm_pinnable_page() inside it both call
is_zone_device_page() on the page*.

But I just noticed try_grab_folio() is also used in gup_pte_range(), where
the thp can be pte-mapped, so I assume we at least need that to handle tail
pages well.  Do we perhaps need the compound_head() in try_grab_folio() as
a separate patch?  Or maybe I was wrong about is_zone_device_page()?

>
> > +                        page_increm - 1,
> > +                        foll_flags) == NULL);
> > +
> > +            for (j = 0; j < page_increm; j++) {
> > +                subpage = nth_page(page, j);
> > +                pages[i+j] = subpage;
> > +                flush_anon_page(vma, subpage, start + j * PAGE_SIZE);
> > +                flush_dcache_page(subpage);
>
> You're better off calling flush_dcache_folio() right at the end.

Will do.  Thanks,
On Wed, Jun 14, 2023 at 11:19:48AM -0400, Peter Xu wrote:
> > > +            for (j = 0; j < page_increm; j++) {
> > > +                subpage = nth_page(page, j);
> > > +                pages[i+j] = subpage;
> > > +                flush_anon_page(vma, subpage, start + j * PAGE_SIZE);
> > > +                flush_dcache_page(subpage);
> >
> > You're better off calling flush_dcache_folio() right at the end.
>
> Will do.

Ah, when I started to modify it I noticed it's a double-edged sword: we'd
then also flush the dcache over the whole folio even when we gup a single
page.  We'd only start to benefit once some arch actually implements
flush_dcache_folio() (which seems to be none, right now..), while we'd
already lose by amplifying the flush when gup covers only part of a folio.

Perhaps I should keep it as-is, which will still be accurate, always faster
than the old code, and definitely not a regression in any form?
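For reference, the flush_dcache_folio() variant weighed in this exchange would
look roughly like the sketch below. This is not what was merged; it assumes
flush_dcache_folio() and page_folio() are usable at this point in
__get_user_pages() and only illustrates the suggested restructuring:

```c
        for (j = 0; j < page_increm; j++) {
            subpage = nth_page(page, j);
            pages[i + j] = subpage;
            flush_anon_page(vma, subpage, start + j * PAGE_SIZE);
        }
        /* one dcache flush for the whole folio instead of per subpage */
        flush_dcache_folio(page_folio(page));
```

As the reply above notes, the folio-wide flush can over-flush when only part
of the folio is gupped, which is why the posted patch keeps the per-subpage
flush_dcache_page().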
On Tue, Jun 13, 2023 at 05:53:45PM -0400, Peter Xu wrote:
> The acceleration of THP was done with ctx.page_mask, however it'll be
> ignored if **pages is non-NULL.
>
> The old optimization was introduced in 2013 in 240aadeedc4a ("mm:
> accelerate mm_populate() treatment of THP pages"). It didn't explain why
> we can't optimize the **pages non-NULL case. It's possible that at that
> time the major goal was for mm_populate() which should be enough back then.
>
> Optimize thp for all cases, by properly looping over each subpage, doing
> cache flushes, and boost refcounts / pincounts where needed in one go.
>
> This can be verified using gup_test below:
>
> # chrt -f 1 ./gup_test -m 512 -t -L -n 1024 -r 10
>
> Before: 13992.50 ( +-8.75%)
> After: 378.50 (+-69.62%)
>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  mm/gup.c | 36 +++++++++++++++++++++++++++++-------
>  1 file changed, 29 insertions(+), 7 deletions(-)
>
> diff --git a/mm/gup.c b/mm/gup.c
> index a2d1b3c4b104..cdabc8ea783b 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -1210,16 +1210,38 @@ static long __get_user_pages(struct mm_struct *mm,
>             goto out;
>         }
> next_page:
> -        if (pages) {
> -            pages[i] = page;
> -            flush_anon_page(vma, page, start);
> -            flush_dcache_page(page);
> -            ctx.page_mask = 0;
> -        }
> -
>         page_increm = 1 + (~(start >> PAGE_SHIFT) & ctx.page_mask);
>         if (page_increm > nr_pages)
>             page_increm = nr_pages;
> +
> +        if (pages) {
> +            struct page *subpage;
> +            unsigned int j;
> +
> +            /*
> +             * This must be a large folio (and doesn't need to
> +             * be the whole folio; it can be part of it), do
> +             * the refcount work for all the subpages too.
> +             * Since we already hold refcount on the head page,
> +             * it should never fail.
> +             *
> +             * NOTE: here the page may not be the head page
> +             * e.g. when start addr is not thp-size aligned.
> +             */
> +            if (page_increm > 1)
> +                WARN_ON_ONCE(
> +                    try_grab_folio(compound_head(page),
> +                        page_increm - 1,
> +                        foll_flags) == NULL);

I'm not sure this should be warning but otherwise ignoring this returning
NULL? This feels like a case that could come up in reality,
e.g. folio_ref_try_add_rcu() fails, or !folio_is_longterm_pinnable().

Side-note: I _hate_ the semantics of GUP such that try_grab_folio()
(invoked, other than for huge page cases, by the GUP-fast logic) will
explicitly fail if neither FOLL_GET or FOLL_PIN are specified,
differentiating it from try_grab_page() in this respect.

This is a side-note and not relevant here, as all callers to
__get_user_pages() either explicitly set FOLL_GET if not set by user (in
__get_user_pages_locked()) or don't set pages (e.g. in
faultin_vma_page_range())

> +
> +            for (j = 0; j < page_increm; j++) {
> +                subpage = nth_page(page, j);
> +                pages[i+j] = subpage;
> +                flush_anon_page(vma, subpage, start + j * PAGE_SIZE);
> +                flush_dcache_page(subpage);
> +            }
> +        }
> +
>         i += page_increm;
>         start += page_increm * PAGE_SIZE;
>         nr_pages -= page_increm;
> --
> 2.40.1
>
On Sat, Jun 17, 2023 at 09:27:22PM +0100, Lorenzo Stoakes wrote:
> On Tue, Jun 13, 2023 at 05:53:45PM -0400, Peter Xu wrote:
> > [...]
> > +            /*
> > +             * This must be a large folio (and doesn't need to
> > +             * be the whole folio; it can be part of it), do
> > +             * the refcount work for all the subpages too.
> > +             * Since we already hold refcount on the head page,
> > +             * it should never fail.
> > +             *
> > +             * NOTE: here the page may not be the head page
> > +             * e.g. when start addr is not thp-size aligned.
> > +             */
> > +            if (page_increm > 1)
> > +                WARN_ON_ONCE(
> > +                    try_grab_folio(compound_head(page),
> > +                        page_increm - 1,
> > +                        foll_flags) == NULL);
>
> I'm not sure this should be warning but otherwise ignoring this returning
> NULL? This feels like a case that could come up in reality,
> e.g. folio_ref_try_add_rcu() fails, or !folio_is_longterm_pinnable().

Note that we already hold at least 1 refcount on the folio (also mentioned
in the comment above this chunk of code), so both folio_ref_try_add_rcu()
and folio_is_longterm_pinnable() should already have been called on the
same folio and passed.  If it were going to fail, it should have failed
already, afaict.  I still don't see how that could trigger as long as the
refcount doesn't overflow.

What I can do here is still guard this try_grab_folio() and fail the GUP
if it fails for any reason.  Perhaps that also means I'll keep the one in
hugetlb_follow_page_mask() untouched too.  But I suppose keeping the
WARN_ON_ONCE() still seems proper.

Thanks,
On Mon, Jun 19, 2023 at 03:37:30PM -0400, Peter Xu wrote:
> What I can do here is still guard this try_grab_folio() and fail the GUP
> if it fails for any reason.  Perhaps that also means I'll keep the one in
> hugetlb_follow_page_mask() untouched too.  But I suppose keeping the
> WARN_ON_ONCE() still seems proper.

Here's the outcome I plan to post in the new version, taking care of
try_grab_folio() failures if they ever happen, while also removing the
compound_head() redundancy on the page.

__get_user_pages():
...
===8<===
            /*
             * This must be a large folio (and doesn't need to
             * be the whole folio; it can be part of it), do
             * the refcount work for all the subpages too.
             *
             * NOTE: here the page may not be the head page
             * e.g. when start addr is not thp-size aligned.
             * try_grab_folio() should have taken care of tail
             * pages.
             */
            if (page_increm > 1) {
                struct folio *folio;

                /*
                 * Since we already hold refcount on the
                 * large folio, this should never fail.
                 */
                folio = try_grab_folio(page, page_increm - 1,
                                       foll_flags);
                if (WARN_ON_ONCE(!folio)) {
                    /*
                     * Release the 1st page ref if the
                     * folio is problematic, fail hard.
                     */
                    gup_put_folio(page_folio(page), 1,
                                  foll_flags);
                    ret = -EFAULT;
                    goto out;
                }
            }
===8<===

Thanks,
diff --git a/mm/gup.c b/mm/gup.c
index a2d1b3c4b104..cdabc8ea783b 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1210,16 +1210,38 @@ static long __get_user_pages(struct mm_struct *mm,
             goto out;
         }
 next_page:
-        if (pages) {
-            pages[i] = page;
-            flush_anon_page(vma, page, start);
-            flush_dcache_page(page);
-            ctx.page_mask = 0;
-        }
-
         page_increm = 1 + (~(start >> PAGE_SHIFT) & ctx.page_mask);
         if (page_increm > nr_pages)
             page_increm = nr_pages;
+
+        if (pages) {
+            struct page *subpage;
+            unsigned int j;
+
+            /*
+             * This must be a large folio (and doesn't need to
+             * be the whole folio; it can be part of it), do
+             * the refcount work for all the subpages too.
+             * Since we already hold refcount on the head page,
+             * it should never fail.
+             *
+             * NOTE: here the page may not be the head page
+             * e.g. when start addr is not thp-size aligned.
+             */
+            if (page_increm > 1)
+                WARN_ON_ONCE(
+                    try_grab_folio(compound_head(page),
+                        page_increm - 1,
+                        foll_flags) == NULL);
+
+            for (j = 0; j < page_increm; j++) {
+                subpage = nth_page(page, j);
+                pages[i+j] = subpage;
+                flush_anon_page(vma, subpage, start + j * PAGE_SIZE);
+                flush_dcache_page(subpage);
+            }
+        }
+
         i += page_increm;
         start += page_increm * PAGE_SIZE;
         nr_pages -= page_increm;