From patchwork Sat Feb 18 00:28:09 2023
X-Patchwork-Submitter: James Houghton
X-Patchwork-Id: 58845
Date: Sat, 18 Feb 2023 00:28:09 +0000
In-Reply-To: <20230218002819.1486479-1-jthoughton@google.com>
References: <20230218002819.1486479-1-jthoughton@google.com>
X-Mailer: git-send-email 2.39.2.637.g21b0678d19-goog
Message-ID: <20230218002819.1486479-37-jthoughton@google.com>
Subject: [PATCH v2 36/46] hugetlb: remove huge_pte_lock and huge_pte_lockptr
From: James Houghton
To: Mike Kravetz, Muchun Song, Peter Xu, Andrew Morton
Cc: David Hildenbrand, David Rientjes, Axel Rasmussen, Mina Almasry,
    "Zach O'Keefe", Manish Mishra, Naoya Horiguchi,
"Dr . David Alan Gilbert" , "Matthew Wilcox (Oracle)" , Vlastimil Babka , Baolin Wang , Miaohe Lin , Yang Shi , Frank van der Linden , Jiaqi Yan , linux-mm@kvack.org, linux-kernel@vger.kernel.org, James Houghton X-Spam-Status: No, score=-9.6 required=5.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, SPF_HELO_NONE,SPF_PASS,USER_IN_DEF_DKIM_WL autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1758126781395748801?= X-GMAIL-MSGID: =?utf-8?q?1758126781395748801?= They are replaced with hugetlb_pte_lock{,ptr}. All callers that haven't already been replaced don't get called when using HGM, so we handle them by populating hugetlb_ptes with the standard, hstate-sized huge PTEs. Signed-off-by: James Houghton diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c index 035a0df47af0..c90ac06dc8d9 100644 --- a/arch/powerpc/mm/pgtable.c +++ b/arch/powerpc/mm/pgtable.c @@ -258,11 +258,14 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma, #ifdef CONFIG_PPC_BOOK3S_64 struct hstate *h = hstate_vma(vma); + struct hugetlb_pte hpte; psize = hstate_get_psize(h); #ifdef CONFIG_DEBUG_VM - assert_spin_locked(huge_pte_lockptr(huge_page_shift(h), - vma->vm_mm, ptep)); + /* HGM is not supported for powerpc yet. */ + hugetlb_pte_init(&hpte, ptep, huge_page_shift(h), + hpage_size_to_level(psize)); + assert_spin_locked(hpte.ptl); #endif #else diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h index 6cd4ae08d84d..742e7f2cb170 100644 --- a/include/linux/hugetlb.h +++ b/include/linux/hugetlb.h @@ -1012,14 +1012,6 @@ static inline gfp_t htlb_modify_alloc_mask(struct hstate *h, gfp_t gfp_mask) return modified_mask; } -static inline spinlock_t *huge_pte_lockptr(unsigned int shift, - struct mm_struct *mm, pte_t *pte) -{ - if (shift == PMD_SHIFT) - return pmd_lockptr(mm, (pmd_t *) pte); - return &mm->page_table_lock; -} - #ifndef hugepages_supported /* * Some platform decide whether they support huge pages at boot @@ -1228,12 +1220,6 @@ static inline gfp_t htlb_modify_alloc_mask(struct hstate *h, gfp_t gfp_mask) return 0; } -static inline spinlock_t *huge_pte_lockptr(unsigned int shift, - struct mm_struct *mm, pte_t *pte) -{ - return &mm->page_table_lock; -} - static inline void hugetlb_count_init(struct mm_struct *mm) { } @@ -1308,16 +1294,6 @@ int hugetlb_collapse(struct mm_struct *mm, unsigned long start, } #endif -static inline spinlock_t *huge_pte_lock(struct hstate *h, - struct mm_struct *mm, pte_t *pte) -{ - spinlock_t *ptl; - - ptl = huge_pte_lockptr(huge_page_shift(h), mm, pte); - spin_lock(ptl); - return ptl; -} - static inline spinlock_t *hugetlb_pte_lockptr(struct hugetlb_pte *hpte) { @@ -1353,8 +1329,22 @@ void hugetlb_pte_init(struct mm_struct *mm, struct hugetlb_pte *hpte, pte_t *ptep, unsigned int shift, enum hugetlb_level level) { - __hugetlb_pte_init(hpte, ptep, shift, level, - huge_pte_lockptr(shift, mm, ptep)); + spinlock_t *ptl; + + /* + * For contiguous HugeTLB PTEs that can contain other HugeTLB PTEs + * on the same level, the same PTL for both must be used. + * + * For some architectures that implement hugetlb_walk_step, this + * version of hugetlb_pte_populate() may not be correct to use for + * high-granularity PTEs. Instead, call __hugetlb_pte_populate() + * directly. 
+	 */
+	if (level == HUGETLB_LEVEL_PMD)
+		ptl = pmd_lockptr(mm, (pmd_t *) ptep);
+	else
+		ptl = &mm->page_table_lock;
+	__hugetlb_pte_init(hpte, ptep, shift, level, ptl);
 }
 
 #if defined(CONFIG_HUGETLB_PAGE) && defined(CONFIG_CMA)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 34368072dabe..e0a92e7c1755 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5454,9 +5454,8 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 				put_page(hpage);
 
 				/* Install the new hugetlb folio if src pte stable */
-				dst_ptl = huge_pte_lock(h, dst, dst_pte);
-				src_ptl = huge_pte_lockptr(huge_page_shift(h),
-							   src, src_pte);
+				dst_ptl = hugetlb_pte_lock(&dst_hpte);
+				src_ptl = hugetlb_pte_lockptr(&src_hpte);
 				spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
 				entry = huge_ptep_get(src_pte);
 				if (!pte_same(src_pte_old, entry)) {
@@ -7582,7 +7581,8 @@ pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
 	unsigned long saddr;
 	pte_t *spte = NULL;
 	pte_t *pte;
-	spinlock_t *ptl;
+	struct hugetlb_pte hpte;
+	struct hstate *shstate;
 
 	i_mmap_lock_read(mapping);
 	vma_interval_tree_foreach(svma, &mapping->i_mmap, idx, idx) {
@@ -7603,7 +7603,11 @@ pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
 	if (!spte)
 		goto out;
 
-	ptl = huge_pte_lock(hstate_vma(vma), mm, spte);
+	shstate = hstate_vma(svma);
+
+	hugetlb_pte_init(mm, &hpte, spte, huge_page_shift(shstate),
+			 hpage_size_to_level(huge_page_size(shstate)));
+	spin_lock(hpte.ptl);
 	if (pud_none(*pud)) {
 		pud_populate(mm, pud,
 				(pmd_t *)((unsigned long)spte & PAGE_MASK));
@@ -7611,7 +7615,7 @@ pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
 	} else {
 		put_page(virt_to_page(spte));
 	}
-	spin_unlock(ptl);
+	spin_unlock(hpte.ptl);
 out:
 	pte = (pte_t *)pmd_alloc(mm, pud, addr);
 	i_mmap_unlock_read(mapping);
@@ -8315,6 +8319,7 @@ static void hugetlb_unshare_pmds(struct vm_area_struct *vma,
 	unsigned long address;
 	spinlock_t *ptl;
 	pte_t *ptep;
+	struct hugetlb_pte hpte;
 
 	if (!(vma->vm_flags & VM_MAYSHARE))
 		return;
@@ -8336,7 +8341,10 @@ static void hugetlb_unshare_pmds(struct vm_area_struct *vma,
 		ptep = hugetlb_walk(vma, address, sz);
 		if (!ptep)
 			continue;
-		ptl = huge_pte_lock(h, mm, ptep);
+
+		hugetlb_pte_init(mm, &hpte, ptep, huge_page_shift(h),
+				 hpage_size_to_level(sz));
+		ptl = hugetlb_pte_lock(&hpte);
 		huge_pmd_unshare(mm, vma, address, ptep);
 		spin_unlock(ptl);
 	}
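
For readers following the conversion described in the commit message, a minimal
sketch of the pattern applied to the remaining call sites, assuming the
hugetlb_pte helpers introduced earlier in this series (hugetlb_pte_init(),
hugetlb_pte_lock(), hpage_size_to_level()); example_lock_hugetlb_pte() is a
hypothetical caller, not part of the patch:

/*
 * Sketch only: convert a caller that previously used huge_pte_lock().
 * The caller provides the hugetlb_pte, populated with the standard,
 * hstate-sized huge PTE, exactly as the patch does in hugetlb_unshare_pmds().
 */
#include <linux/hugetlb.h>
#include <linux/mm.h>

static spinlock_t *example_lock_hugetlb_pte(struct vm_area_struct *vma,
					    pte_t *ptep,
					    struct hugetlb_pte *hpte)
{
	struct mm_struct *mm = vma->vm_mm;
	struct hstate *h = hstate_vma(vma);

	/* Old: ptl = huge_pte_lock(h, mm, ptep); */

	/* New: describe the PTE with an hstate-sized hugetlb_pte first. */
	hugetlb_pte_init(mm, hpte, ptep, huge_page_shift(h),
			 hpage_size_to_level(huge_page_size(h)));

	/* Then take the PTL that hugetlb_pte_init() selected. */
	return hugetlb_pte_lock(hpte);
}

The caller still releases the lock with spin_unlock() on the returned
spinlock_t pointer, exactly as before.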
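
The lock choice now made inside hugetlb_pte_init(), keyed off the hugetlb_level
rather than the shift == PMD_SHIFT test of the removed huge_pte_lockptr(), can
be illustrated with another small sketch (also not part of the patch;
example_pte_lockptr() is hypothetical):

/*
 * Illustration only: PMD-level hugetlb PTEs (e.g. 2 MiB mappings on x86-64)
 * use the split PMD lock, while higher-level mappings (e.g. PUD-level 1 GiB
 * pages) fall back to the mm-wide page_table_lock, matching the behaviour of
 * the removed huge_pte_lockptr().
 */
#include <linux/hugetlb.h>
#include <linux/mm.h>

static spinlock_t *example_pte_lockptr(struct mm_struct *mm, pte_t *ptep,
					enum hugetlb_level level)
{
	if (level == HUGETLB_LEVEL_PMD)
		return pmd_lockptr(mm, (pmd_t *)ptep);
	return &mm->page_table_lock;
}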