From patchwork Sat Oct 22 07:19:14 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 7835
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Baolin Wang, Mike Kravetz,
 David Hildenbrand, Muchun Song, Andrew Morton
Subject: [PATCH 5.19 075/717] mm/hugetlb: fix races when looking up a CONT-PTE/PMD size hugetlb page
Date: Sat, 22 Oct 2022 09:19:14 +0200
Message-Id: <20221022072428.550961015@linuxfoundation.org>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20221022072415.034382448@linuxfoundation.org>
References: <20221022072415.034382448@linuxfoundation.org>
User-Agent: quilt/0.67
MIME-Version: 1.0

From: Baolin Wang

commit fac35ba763ed07ba93154c95ffc0c4a55023707f upstream.

Some architectures (like ARM64) can support CONT-PTE/PMD size hugetlb, which means they can support not only PMD/PUD size hugetlb (2M and 1G), but also CONT-PTE/PMD sizes (64K and 32M) if a 4K page size is specified.

So when looking up a CONT-PTE size hugetlb page by follow_page(), it will use pte_offset_map_lock() to get the pte entry lock for the CONT-PTE size hugetlb in follow_page_pte(). However, this pte entry lock is incorrect for the CONT-PTE size hugetlb, since we should use huge_pte_lock() to get the correct lock, which is mm->page_table_lock.
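Why the two locks differ can be seen from the hugetlb lock-selection helpers. The snippet below is a simplified sketch, paraphrased from the v5.19-era include/linux/hugetlb.h rather than quoted verbatim: only PMD_SIZE hugetlb uses the split PMD lock, while every other size, including CONT-PTE/PMD, is serialized on mm->page_table_lock, whereas pte_offset_map_lock() in follow_page_pte() takes the per-page-table split PTE lock instead.

static inline spinlock_t *huge_pte_lockptr(struct hstate *h,
					   struct mm_struct *mm, pte_t *pte)
{
	/* Only PMD-sized hugetlb uses the split PMD lock... */
	if (huge_page_size(h) == PMD_SIZE)
		return pmd_lockptr(mm, (pmd_t *)pte);
	/* ...all other sizes (CONT-PTE/PMD, PUD) fall back to the mm-wide lock. */
	return &mm->page_table_lock;
}

static inline spinlock_t *huge_pte_lock(struct hstate *h,
					struct mm_struct *mm, pte_t *pte)
{
	spinlock_t *ptl = huge_pte_lockptr(h, mm, pte);

	spin_lock(ptl);
	return ptl;
}

So a lookup that only holds the split PTE/PMD lock does not exclude a concurrent hugetlb migration or poison path that holds mm->page_table_lock, which opens the window described below.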
That means the pte entry of the CONT-PTE size hugetlb under the current pte lock is unstable in follow_page_pte(), and we can continue to migrate or poison the pte entry of the CONT-PTE size hugetlb, which can cause some potential race issues, even though they are under the 'pte lock'.

For example, suppose thread A is trying to look up a CONT-PTE size hugetlb page by the move_pages() syscall under the lock; however, another thread B can migrate the CONT-PTE hugetlb page at the same time, which will cause thread A to get an incorrect page. If thread A also wants to do page migration, then a data inconsistency error occurs. Moreover, we have the same issue for CONT-PMD size hugetlb in follow_huge_pmd().

To fix the above issues, rename follow_huge_pmd() to follow_huge_pmd_pte() to handle both PMD and PTE level size hugetlb, and use huge_pte_lock() to get the correct pte entry lock to make the pte entry stable.

Mike said:

  Support for CONT_PMD/_PTE was added with bb9dd3df8ee9 ("arm64: hugetlb:
  refactor find_num_contig()"). Patch series "Support for contiguous pte
  hugepages", v4. However, I do not believe these code paths were executed
  until migration support was added with 5480280d3f2d ("arm64/mm: enable
  HugeTLB migration for contiguous bit HugeTLB pages"). I would go with
  5480280d3f2d for the Fixes: target.

Link: https://lkml.kernel.org/r/635f43bdd85ac2615a58405da82b4d33c6e5eb05.1662017562.git.baolin.wang@linux.alibaba.com
Fixes: 5480280d3f2d ("arm64/mm: enable HugeTLB migration for contiguous bit HugeTLB pages")
Signed-off-by: Baolin Wang
Suggested-by: Mike Kravetz
Reviewed-by: Mike Kravetz
Cc: David Hildenbrand
Cc: Muchun Song
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton
Signed-off-by: Greg Kroah-Hartman
---
 include/linux/hugetlb.h |  8 ++++----
 mm/gup.c                | 14 +++++++++++++-
 mm/hugetlb.c            | 27 +++++++++++++--------------
 3 files changed, 30 insertions(+), 19 deletions(-)

--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -203,8 +203,8 @@ struct page *follow_huge_addr(struct mm_
 struct page *follow_huge_pd(struct vm_area_struct *vma,
 			    unsigned long address, hugepd_t hpd,
 			    int flags, int pdshift);
-struct page *follow_huge_pmd(struct mm_struct *mm, unsigned long address,
-				pmd_t *pmd, int flags);
+struct page *follow_huge_pmd_pte(struct vm_area_struct *vma, unsigned long address,
+				 int flags);
 struct page *follow_huge_pud(struct mm_struct *mm, unsigned long address,
 				pud_t *pud, int flags);
 struct page *follow_huge_pgd(struct mm_struct *mm, unsigned long address,
@@ -308,8 +308,8 @@ static inline struct page *follow_huge_p
 	return NULL;
 }
 
-static inline struct page *follow_huge_pmd(struct mm_struct *mm,
-				unsigned long address, pmd_t *pmd, int flags)
+static inline struct page *follow_huge_pmd_pte(struct vm_area_struct *vma,
+				unsigned long address, int flags)
 {
 	return NULL;
 }

--- a/mm/gup.c
+++ b/mm/gup.c
@@ -531,6 +531,18 @@ static struct page *follow_page_pte(stru
 	if (WARN_ON_ONCE((flags & (FOLL_PIN | FOLL_GET)) ==
 			 (FOLL_PIN | FOLL_GET)))
 		return ERR_PTR(-EINVAL);
+
+	/*
+	 * Considering PTE level hugetlb, like continuous-PTE hugetlb on
+	 * ARM64 architecture.
+	 */
+	if (is_vm_hugetlb_page(vma)) {
+		page = follow_huge_pmd_pte(vma, address, flags);
+		if (page)
+			return page;
+		return no_page_table(vma, flags);
+	}
+
 retry:
 	if (unlikely(pmd_bad(*pmd)))
 		return no_page_table(vma, flags);
@@ -663,7 +675,7 @@ static struct page *follow_pmd_mask(stru
 	if (pmd_none(pmdval))
 		return no_page_table(vma, flags);
 	if (pmd_huge(pmdval) && is_vm_hugetlb_page(vma)) {
-		page = follow_huge_pmd(mm, address, pmd, flags);
+		page = follow_huge_pmd_pte(vma, address, flags);
 		if (page)
 			return page;
 		return no_page_table(vma, flags);

--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6906,12 +6906,13 @@ follow_huge_pd(struct vm_area_struct *vm
 }
 
 struct page * __weak
-follow_huge_pmd(struct mm_struct *mm, unsigned long address,
-		pmd_t *pmd, int flags)
+follow_huge_pmd_pte(struct vm_area_struct *vma, unsigned long address, int flags)
 {
+	struct hstate *h = hstate_vma(vma);
+	struct mm_struct *mm = vma->vm_mm;
 	struct page *page = NULL;
 	spinlock_t *ptl;
-	pte_t pte;
+	pte_t *ptep, pte;
 
 	/*
 	 * FOLL_PIN is not supported for follow_page(). Ordinary GUP goes via
@@ -6921,17 +6922,15 @@ follow_huge_pmd(struct mm_struct *mm, un
 		return NULL;
 
 retry:
-	ptl = pmd_lockptr(mm, pmd);
-	spin_lock(ptl);
-	/*
-	 * make sure that the address range covered by this pmd is not
-	 * unmapped from other threads.
-	 */
-	if (!pmd_huge(*pmd))
-		goto out;
-	pte = huge_ptep_get((pte_t *)pmd);
+	ptep = huge_pte_offset(mm, address, huge_page_size(h));
+	if (!ptep)
+		return NULL;
+
+	ptl = huge_pte_lock(h, mm, ptep);
+	pte = huge_ptep_get(ptep);
 	if (pte_present(pte)) {
-		page = pmd_page(*pmd) + ((address & ~PMD_MASK) >> PAGE_SHIFT);
+		page = pte_page(pte) +
+			((address & ~huge_page_mask(h)) >> PAGE_SHIFT);
 		/*
 		 * try_grab_page() should always succeed here, because: a) we
 		 * hold the pmd (ptl) lock, and b) we've just checked that the
@@ -6947,7 +6946,7 @@ retry:
 	} else {
 		if (is_hugetlb_entry_migration(pte)) {
 			spin_unlock(ptl);
-			__migration_entry_wait_huge((pte_t *)pmd, ptl);
+			__migration_entry_wait_huge(ptep, ptl);
 			goto retry;
 		}
 		/*
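For readers who want to see the shape of the scenario the changelog describes, here is a hypothetical userspace sketch (not part of the patch, and not a guaranteed reproducer; hitting the window depends on timing and configuration). Two threads issue move_pages() against the same 64K CONT-PTE hugetlb page on an ARM64 kernel with 4K base pages; each call looks the page up via follow_page() and may try to migrate it. It assumes a configured pool of 64K hugetlb pages and at least two NUMA nodes, and builds with something like: gcc -pthread repro.c -lnuma

/*
 * Hypothetical reproducer sketch (illustrative only, not part of the patch):
 * two threads race move_pages() on the same CONT-PTE (64K) hugetlb page.
 */
#define _GNU_SOURCE
#include <numaif.h>		/* move_pages(), MPOL_MF_MOVE (libnuma) */
#include <pthread.h>
#include <string.h>
#include <sys/mman.h>

#ifndef MAP_HUGE_64KB
#define MAP_HUGE_64KB	(16U << 26)	/* log2(64K) << MAP_HUGE_SHIFT */
#endif

#define LEN	(64 * 1024)

static void *addr;

static void *mover(void *arg)
{
	void *pages[1] = { addr };
	int nodes[1] = { (int)(long)arg };
	int status[1];

	/* Looks up the page via follow_page() and asks for migration. */
	move_pages(0, 1, pages, nodes, status, MPOL_MF_MOVE);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	addr = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | MAP_HUGE_64KB,
		    -1, 0);
	if (addr == MAP_FAILED)
		return 1;
	memset(addr, 0x5a, LEN);	/* fault the 64K hugetlb page in */

	pthread_create(&a, NULL, mover, (void *)0L);
	pthread_create(&b, NULL, mover, (void *)1L);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}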