From patchwork Wed Jan 3 09:14:20 2024
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 184683
From: peterx@redhat.com
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: James Houghton, David Hildenbrand, "Kirill A. Shutemov", Yang Shi,
    peterx@redhat.com, linux-riscv@lists.infradead.org, Andrew Morton,
    "Aneesh Kumar K.V",
V" , Rik van Riel , Andrea Arcangeli , Axel Rasmussen , Mike Rapoport , John Hubbard , Vlastimil Babka , Michael Ellerman , Christophe Leroy , Andrew Jones , linuxppc-dev@lists.ozlabs.org, Mike Kravetz , Muchun Song , linux-arm-kernel@lists.infradead.org, Jason Gunthorpe , Christoph Hellwig , Lorenzo Stoakes , Matthew Wilcox Subject: [PATCH v2 10/13] mm/gup: Handle huge pud for follow_pud_mask() Date: Wed, 3 Jan 2024 17:14:20 +0800 Message-ID: <20240103091423.400294-11-peterx@redhat.com> In-Reply-To: <20240103091423.400294-1-peterx@redhat.com> References: <20240103091423.400294-1-peterx@redhat.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.11.54.10 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1787060386499295733 X-GMAIL-MSGID: 1787060386499295733 From: Peter Xu Teach follow_pud_mask() to be able to handle normal PUD pages like hugetlb. Rename follow_devmap_pud() to follow_huge_pud() so that it can process either huge devmap or hugetlb. Move it out of TRANSPARENT_HUGEPAGE_PUD and and huge_memory.c (which relies on CONFIG_THP). In the new follow_huge_pud(), taking care of possible CoR for hugetlb if necessary. touch_pud() needs to be moved out of huge_memory.c to be accessable from gup.c even if !THP. Since at it, optimize the non-present check by adding a pud_present() early check before taking the pgtable lock, failing the follow_page() early if PUD is not present: that is required by both devmap or hugetlb. Use pud_huge() to also cover the pud_devmap() case. One more trivial thing to mention is, introduce "pud_t pud" in the code paths along the way, so the code doesn't dereference *pudp multiple time. Not only because that looks less straightforward, but also because if the dereference really happened, it's not clear whether there can be race to see different *pudp values when it's being modified at the same time. Setting ctx->page_mask properly for a PUD entry. As a side effect, this patch should also be able to optimize devmap GUP on PUD to be able to jump over the whole PUD range, but not yet verified. Hugetlb already can do so prior to this patch. 
Signed-off-by: Peter Xu
Reviewed-by: Jason Gunthorpe
---
 include/linux/huge_mm.h |  8 -----
 mm/gup.c                | 70 +++++++++++++++++++++++++++++++++++++++--
 mm/huge_memory.c        | 47 ++-------------------------
 mm/internal.h           |  2 ++
 4 files changed, 71 insertions(+), 56 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 96bd4b5d027e..3b73d20d537e 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -345,8 +345,6 @@ static inline bool folio_test_pmd_mappable(struct folio *folio)
 
 struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
 		pmd_t *pmd, int flags, struct dev_pagemap **pgmap);
-struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr,
-		pud_t *pud, int flags, struct dev_pagemap **pgmap);
 
 vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf);
 
@@ -502,12 +500,6 @@ static inline struct page *follow_devmap_pmd(struct vm_area_struct *vma,
 	return NULL;
 }
 
-static inline struct page *follow_devmap_pud(struct vm_area_struct *vma,
-	unsigned long addr, pud_t *pud, int flags, struct dev_pagemap **pgmap)
-{
-	return NULL;
-}
-
 static inline bool thp_migration_supported(void)
 {
 	return false;
diff --git a/mm/gup.c b/mm/gup.c
index 63845b3ec44f..760406180222 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -525,6 +525,70 @@ static struct page *no_page_table(struct vm_area_struct *vma,
 	return NULL;
 }
 
+#ifdef CONFIG_PGTABLE_HAS_HUGE_LEAVES
+static struct page *follow_huge_pud(struct vm_area_struct *vma,
+				    unsigned long addr, pud_t *pudp,
+				    int flags, struct follow_page_context *ctx)
+{
+	struct mm_struct *mm = vma->vm_mm;
+	struct page *page;
+	pud_t pud = *pudp;
+	unsigned long pfn = pud_pfn(pud);
+	int ret;
+
+	assert_spin_locked(pud_lockptr(mm, pudp));
+
+	if ((flags & FOLL_WRITE) && !pud_write(pud))
+		return NULL;
+
+	if (!pud_present(pud))
+		return NULL;
+
+	pfn += (addr & ~PUD_MASK) >> PAGE_SHIFT;
+
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+	if (pud_devmap(pud)) {
+		/*
+		 * device mapped pages can only be returned if the caller
+		 * will manage the page reference count.
+		 *
+		 * At least one of FOLL_GET | FOLL_PIN must be set, so
+		 * assert that here:
+		 */
+		if (!(flags & (FOLL_GET | FOLL_PIN)))
+			return ERR_PTR(-EEXIST);
+
+		if (flags & FOLL_TOUCH)
+			touch_pud(vma, addr, pudp, flags & FOLL_WRITE);
+
+		ctx->pgmap = get_dev_pagemap(pfn, ctx->pgmap);
+		if (!ctx->pgmap)
+			return ERR_PTR(-EFAULT);
+	}
+#endif	/* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
+	page = pfn_to_page(pfn);
+
+	if (!pud_devmap(pud) && !pud_write(pud) &&
+	    gup_must_unshare(vma, flags, page))
+		return ERR_PTR(-EMLINK);
+
+	ret = try_grab_page(page, flags);
+	if (ret)
+		page = ERR_PTR(ret);
+	else
+		ctx->page_mask = HPAGE_PUD_NR - 1;
+
+	return page;
+}
+#else  /* CONFIG_PGTABLE_HAS_HUGE_LEAVES */
+static struct page *follow_huge_pud(struct vm_area_struct *vma,
+				    unsigned long addr, pud_t *pudp,
+				    int flags, struct follow_page_context *ctx)
+{
+	return NULL;
+}
+#endif	/* CONFIG_PGTABLE_HAS_HUGE_LEAVES */
+
 static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
 		pte_t *pte, unsigned int flags)
 {
@@ -760,11 +824,11 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
 
 	pudp = pud_offset(p4dp, address);
 	pud = READ_ONCE(*pudp);
-	if (pud_none(pud))
+	if (pud_none(pud) || !pud_present(pud))
 		return no_page_table(vma, flags, address);
-	if (pud_devmap(pud)) {
+	if (pud_huge(pud)) {
 		ptl = pud_lock(mm, pudp);
-		page = follow_devmap_pud(vma, address, pudp, flags, &ctx->pgmap);
+		page = follow_huge_pud(vma, address, pudp, flags, ctx);
 		spin_unlock(ptl);
 		if (page)
 			return page;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 94ef5c02b459..9993d2b18809 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1373,8 +1373,8 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 }
 
 #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
-static void touch_pud(struct vm_area_struct *vma, unsigned long addr,
-		      pud_t *pud, bool write)
+void touch_pud(struct vm_area_struct *vma, unsigned long addr,
+	       pud_t *pud, bool write)
 {
 	pud_t _pud;
 
@@ -1386,49 +1386,6 @@ static void touch_pud(struct vm_area_struct *vma, unsigned long addr,
 	update_mmu_cache_pud(vma, addr, pud);
 }
 
-struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr,
-		pud_t *pud, int flags, struct dev_pagemap **pgmap)
-{
-	unsigned long pfn = pud_pfn(*pud);
-	struct mm_struct *mm = vma->vm_mm;
-	struct page *page;
-	int ret;
-
-	assert_spin_locked(pud_lockptr(mm, pud));
-
-	if (flags & FOLL_WRITE && !pud_write(*pud))
-		return NULL;
-
-	if (pud_present(*pud) && pud_devmap(*pud))
-		/* pass */;
-	else
-		return NULL;
-
-	if (flags & FOLL_TOUCH)
-		touch_pud(vma, addr, pud, flags & FOLL_WRITE);
-
-	/*
-	 * device mapped pages can only be returned if the
-	 * caller will manage the page reference count.
-	 *
-	 * At least one of FOLL_GET | FOLL_PIN must be set, so assert that here:
-	 */
-	if (!(flags & (FOLL_GET | FOLL_PIN)))
-		return ERR_PTR(-EEXIST);
-
-	pfn += (addr & ~PUD_MASK) >> PAGE_SHIFT;
-	*pgmap = get_dev_pagemap(pfn, *pgmap);
-	if (!*pgmap)
-		return ERR_PTR(-EFAULT);
-	page = pfn_to_page(pfn);
-
-	ret = try_grab_page(page, flags);
-	if (ret)
-		page = ERR_PTR(ret);
-
-	return page;
-}
-
 int copy_huge_pud(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		  pud_t *dst_pud, pud_t *src_pud, unsigned long addr,
 		  struct vm_area_struct *vma)
diff --git a/mm/internal.h b/mm/internal.h
index f309a010d50f..5821b7a14b62 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1007,6 +1007,8 @@ int __must_check try_grab_page(struct page *page, unsigned int flags);
 /*
  * mm/huge_memory.c
  */
+void touch_pud(struct vm_area_struct *vma, unsigned long addr,
+	       pud_t *pud, bool write);
 struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 				   unsigned long addr, pmd_t *pmd,
 				   unsigned int flags);
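
For context on the ctx->page_mask side effect mentioned in the commit
message, here is a simplified view of how the GUP loop consumes page_mask
(adapted from __get_user_pages() in mm/gup.c; illustrative only, not part
of this patch):

	/*
	 * With ctx.page_mask set to HPAGE_PUD_NR - 1 by follow_huge_pud(),
	 * one successful lookup lets the loop advance by up to a whole
	 * PUD worth of pages instead of a single PAGE_SIZE step.
	 */
	page_increm = 1 + (~(start >> PAGE_SHIFT) & ctx.page_mask);
	if (page_increm > nr_pages)
		page_increm = nr_pages;
	start += page_increm * PAGE_SIZE;
	nr_pages -= page_increm;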