| Message ID | 20230715032802.2508163-1-linmiaohe@huawei.com |
|---|---|
| State | New |
Headers |
From: Miaohe Lin <linmiaohe@huawei.com>
To: akpm@linux-foundation.org, hannes@cmpxchg.org, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com
Cc: muchun.song@linux.dev, linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, linmiaohe@huawei.com
Subject: [PATCH] mm/memcg: use get_page() for device private pages in mc_handle_swap_pte()
Date: Sat, 15 Jul 2023 11:28:02 +0800
Message-ID: <20230715032802.2508163-1-linmiaohe@huawei.com> |
Series | mm/memcg: use get_page() for device private pages in mc_handle_swap_pte() |
Commit Message
Miaohe Lin
July 15, 2023, 3:28 a.m. UTC
When the page table lock is held, the page can't be freed from under us.
So use get_page() to take the extra page reference instead of
get_page_unless_zero(), which simplifies the code.
No functional change intended.
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
mm/memcontrol.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
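The difference between the two helpers is the core of this patch: get_page_unless_zero() can fail if the refcount has already dropped to zero, while get_page() assumes the caller knows the page is live. A minimal user-space sketch of the two semantics, using C11 atomics on a hypothetical toy_page struct (not the kernel's struct page):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Toy model of a page refcount (hypothetical; user-space sketch only). */
struct toy_page {
    atomic_int refcount;
};

/* Mirrors get_page(): unconditionally takes a reference. Only safe when
 * the caller already knows the refcount cannot be zero. */
static void toy_get_page(struct toy_page *page)
{
    atomic_fetch_add(&page->refcount, 1);
}

/* Mirrors get_page_unless_zero(): refuses to take a reference on a page
 * whose count has already hit zero (i.e. a page being freed). */
static bool toy_get_page_unless_zero(struct toy_page *page)
{
    int old = atomic_load(&page->refcount);
    while (old != 0) {
        if (atomic_compare_exchange_weak(&page->refcount, &old, old + 1))
            return true;
        /* CAS failure reloads 'old'; loop re-checks for zero. */
    }
    return false;
}
```

The patch's claim is that in mc_handle_swap_pte() the zero case can't happen, so the conditional variant is unnecessary.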
Comments
On Sat, Jul 15, 2023 at 11:28:02AM +0800, Miaohe Lin wrote:
> When page table locked is held, the page can't be freed from under us.

But the page isn't mapped into the page table ... there's a swap entry
in the page table, so I don't think your logic holds.

> So use get_page() to get the extra page reference to simplify the code.
> No functional change intended.
>
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/memcontrol.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 93e3cc581b51..4ca382efb1ca 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -5670,8 +5670,9 @@ static struct page *mc_handle_swap_pte(struct vm_area_struct *vma,
>  	 */
>  	if (is_device_private_entry(ent)) {
>  		page = pfn_swap_entry_to_page(ent);
> -		if (!get_page_unless_zero(page))
> -			return NULL;
> +		/* Get a page reference while we know the page can't be freed. */
> +		get_page(page);
> +
>  		return page;
>  	}
>
> --
> 2.33.0
On 2023/7/15 11:56, Matthew Wilcox wrote:
> On Sat, Jul 15, 2023 at 11:28:02AM +0800, Miaohe Lin wrote:
>> When page table locked is held, the page can't be freed from under us.
>
> But the page isn't mapped into the page table ... there's a swap entry
> in the page table, so I don't think your logic holds.

IIUC, a device private entry holds one page refcount while it is set in the
page table. And there's similar code in do_swap_page():

	vm_fault_t do_swap_page(struct vm_fault *vmf)
		if (unlikely(non_swap_entry(entry))) {
			if (is_device_private_entry(entry))
				/*
				 * Get a page reference while we know the page can't be
				 * freed.
				 */
				get_page(vmf->page);
				pte_unmap_unlock(vmf->pte, vmf->ptl);
				ret = vmf->page->pgmap->ops->migrate_to_ram(vmf);
				put_page(vmf->page);
	...

If my logic doesn't hold, do_swap_page() will need to fix its code too. Or
am I missing something? Thanks, Matthew.
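The do_swap_page() snippet quoted here follows a common kernel pattern: take a reference while the lock still guarantees the page is live, drop the lock, do the slow work, then put the reference. A user-space sketch of that shape (hypothetical names, a toy spinlock standing in for the page table lock):

```c
#include <assert.h>
#include <stdatomic.h>

/* Toy page with an atomic refcount (user-space model, not kernel code). */
struct model_page { atomic_int refcount; };

/* Models the page table lock (ptl) with a simple atomic_flag spinlock. */
static atomic_flag ptl = ATOMIC_FLAG_INIT;
static void ptl_lock(void)   { while (atomic_flag_test_and_set(&ptl)) { } }
static void ptl_unlock(void) { atomic_flag_clear(&ptl); }

static int handle_device_private_fault(struct model_page *page)
{
    ptl_lock();
    /* Under the lock the swap entry still pins the page, so an
     * unconditional get_page()-style bump cannot race with a free. */
    atomic_fetch_add(&page->refcount, 1);
    ptl_unlock();

    /* ... the real code calls migrate_to_ram() here; our reference keeps
     * the page alive even though the lock has been dropped ... */

    atomic_fetch_sub(&page->refcount, 1);  /* put_page() */
    return 0;
}
```

The reference taken under the lock is what lets the lock be dropped before the driver callback runs.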
On 2023/7/17 10:28, Miaohe Lin wrote:
> On 2023/7/15 11:56, Matthew Wilcox wrote:
>> On Sat, Jul 15, 2023 at 11:28:02AM +0800, Miaohe Lin wrote:
>>> When page table locked is held, the page can't be freed from under us.
>>
>> But the page isn't mapped into the page table ... there's a swap entry
>> in the page table, so I don't think your logic holds.
>
> IIUC, a device private entry holds one page refcount while it is set in
> the page table.

Take remove_migration_pte() as an example: it takes one extra page refcount
when it sets a device private entry:

	remove_migration_pte()
		...
		folio_get(folio);
		...
		if (unlikely(is_device_private_page(new))) {
			make_[writable|readable]_device_private_entry();
		}
		...
		set_pte_at()

> And there's similar code in do_swap_page():
>
> 	vm_fault_t do_swap_page(struct vm_fault *vmf)
> 		if (unlikely(non_swap_entry(entry))) {
> 			if (is_device_private_entry(entry))
> 				/*
> 				 * Get a page reference while we know the page can't be
> 				 * freed.
> 				 */
> 				get_page(vmf->page);
> 				pte_unmap_unlock(vmf->pte, vmf->ptl);
> 				ret = vmf->page->pgmap->ops->migrate_to_ram(vmf);
> 				put_page(vmf->page);
> 	...
>
> If my logic doesn't hold, do_swap_page() will need to fix its code too.
> Or am I missing something?

Can I have your opinion? Thanks.
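The invariant argued in this reply can be modeled directly: if installing a device private entry takes its own page reference (as remove_migration_pte() does via folio_get() before set_pte_at()), and clearing the entry drops it, then the refcount is at least 1 for as long as the entry exists. A hypothetical user-space sketch of that pin/unpin pairing:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

/* Toy model (hypothetical names; not kernel structures). */
struct mock_page { atomic_int refcount; };
struct mock_pte  { struct mock_page *page; };  /* a device private swap entry */

/* Installing the entry pins the page, mirroring folio_get() before
 * set_pte_at() in remove_migration_pte(). */
static void install_device_private_entry(struct mock_pte *pte,
                                         struct mock_page *page)
{
    atomic_fetch_add(&page->refcount, 1);  /* the entry's own reference */
    pte->page = page;
}

/* Clearing the entry releases its pin. While the page table lock blocks
 * this teardown, the refcount cannot reach zero, so a plain unconditional
 * get is safe for anyone holding the lock. */
static void clear_device_private_entry(struct mock_pte *pte)
{
    struct mock_page *page = pte->page;
    pte->page = NULL;
    atomic_fetch_sub(&page->refcount, 1);
}
```

Under this model, get_page_unless_zero() in mc_handle_swap_pte() can indeed never observe zero, which is the premise of the patch.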
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 93e3cc581b51..4ca382efb1ca 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5670,8 +5670,9 @@ static struct page *mc_handle_swap_pte(struct vm_area_struct *vma,
 	 */
 	if (is_device_private_entry(ent)) {
 		page = pfn_swap_entry_to_page(ent);
-		if (!get_page_unless_zero(page))
-			return NULL;
+		/* Get a page reference while we know the page can't be freed. */
+		get_page(page);
+
 		return page;
 	}