Message ID: 20231107135216.415926-1-wangkefeng.wang@huawei.com
Headers:
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Matthew Wilcox <willy@infradead.org>, David Hildenbrand <david@redhat.com>, Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: [PATCH 0/6] mm: cleanup and use more folio in page fault
Date: Tue, 7 Nov 2023 21:52:10 +0800
Message-ID: <20231107135216.415926-1-wangkefeng.wang@huawei.com>
Series: mm: cleanup and use more folio in page fault
Message
Kefeng Wang
Nov. 7, 2023, 1:52 p.m. UTC
Rename page_copy_prealloc() to folio_prealloc(), which is used by more
functions; also do more folio conversion in page fault.

Kefeng Wang (6):
  mm: ksm: use more folio api in ksm_might_need_to_copy()
  mm: memory: use a folio in validate_page_before_insert()
  mm: memory: rename page_copy_prealloc() to folio_prealloc()
  mm: memory: use a folio in do_cow_fault()
  mm: memory: use folio_prealloc() in wp_page_copy()
  mm: memory: use folio_prealloc() in do_anonymous_page()

 include/linux/ksm.h |  4 +--
 mm/ksm.c            | 36 +++++++++++------------
 mm/memory.c         | 72 +++++++++++++++++++--------------------------
 3 files changed, 50 insertions(+), 62 deletions(-)
Comments
On Tue, Nov 07, 2023 at 09:52:11PM +0800, Kefeng Wang wrote:
>  struct page *ksm_might_need_to_copy(struct page *page,
> -			struct vm_area_struct *vma, unsigned long address)
> +			struct vm_area_struct *vma, unsigned long addr)
>  {
>  	struct folio *folio = page_folio(page);
>  	struct anon_vma *anon_vma = folio_anon_vma(folio);
> -	struct page *new_page;
> +	struct folio *new_folio;
>
> -	if (PageKsm(page)) {
> -		if (page_stable_node(page) &&
> +	if (folio_test_ksm(folio)) {
> +		if (folio_stable_node(folio) &&
>  		    !(ksm_run & KSM_RUN_UNMERGE))
>  			return page;	/* no need to copy it */
>  	} else if (!anon_vma) {
>  		return page;	/* no need to copy it */
> -	} else if (page->index == linear_page_index(vma, address) &&
> +	} else if (page->index == linear_page_index(vma, addr) &&

Hmm. page->index is going away. What should we do here instead?

The rest of this looks good.
On 2023/11/7 22:24, Matthew Wilcox wrote:
> On Tue, Nov 07, 2023 at 09:52:11PM +0800, Kefeng Wang wrote:
>>  struct page *ksm_might_need_to_copy(struct page *page,
>> -			struct vm_area_struct *vma, unsigned long address)
>> +			struct vm_area_struct *vma, unsigned long addr)
>>  {
>>  	struct folio *folio = page_folio(page);
>>  	struct anon_vma *anon_vma = folio_anon_vma(folio);
>> -	struct page *new_page;
>> +	struct folio *new_folio;
>>
>> -	if (PageKsm(page)) {
>> -		if (page_stable_node(page) &&
>> +	if (folio_test_ksm(folio)) {
>> +		if (folio_stable_node(folio) &&
>>  		    !(ksm_run & KSM_RUN_UNMERGE))
>>  			return page;	/* no need to copy it */
>>  	} else if (!anon_vma) {
>>  		return page;	/* no need to copy it */
>> -	} else if (page->index == linear_page_index(vma, address) &&
>> +	} else if (page->index == linear_page_index(vma, addr) &&
>
> Hmm. page->index is going away. What should we do here instead?

Do you mean to replace page->index to folio->index, or kill index from
struct page?

>
> The rest of this looks good.
>
On Wed, Nov 08, 2023 at 09:40:09AM +0800, Kefeng Wang wrote:
> On 2023/11/7 22:24, Matthew Wilcox wrote:
> > On Tue, Nov 07, 2023 at 09:52:11PM +0800, Kefeng Wang wrote:
> > > struct page *ksm_might_need_to_copy(struct page *page,
> > > [...]
> >
> > Hmm. page->index is going away. What should we do here instead?
>
> Do you mean to replace page->index to folio->index, or kill index from
> struct page?

I'm asking you what we should do.

Tail pages already don't have a valid ->index (or ->mapping).
So presumably we can't see a tail page here today. But will we in future?

Just to remind you, the goal here is:

struct page {
	unsigned long memdesc;
};

so folios will be the only thing that have a ->index. I haven't looked
at this code; I know nothing about it. But you're changing it, so you
must have some understanding of it.
On 2023/11/8 21:59, Matthew Wilcox wrote:
> On Wed, Nov 08, 2023 at 09:40:09AM +0800, Kefeng Wang wrote:
>> On 2023/11/7 22:24, Matthew Wilcox wrote:
>>> On Tue, Nov 07, 2023 at 09:52:11PM +0800, Kefeng Wang wrote:
>>>> struct page *ksm_might_need_to_copy(struct page *page,
>>>> [...]
>>>
>>> Hmm. page->index is going away. What should we do here instead?
>>
>> Do you mean to replace page->index to folio->index, or kill index from
>> struct page?
>
> I'm asking you what we should do.
>
> Tail pages already don't have a valid ->index (or ->mapping).
> So presumably we can't see a tail page here today. But will we in future?

I think we could replace page->index to page_to_pgoff(page).

> Just to remind you, the goal here is:
>
> struct page {
> 	unsigned long memdesc;
> };

Get your point, that will be great.

> so folios will be the only thing that have a ->index. I haven't looked
> at this code; I know nothing about it. But you're changing it, so you
> must have some understanding of it.
>
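[Editor's note: as background for the page_to_pgoff() suggestion above, only the head page of a compound page carries a valid ->index in the kernel, which is why a tail page's offset has to be derived from its head. A minimal userspace model of that idea, with illustrative names rather than the kernel's, might look like this:]

```c
#include <assert.h>

/*
 * Userspace model of the tail-page problem discussed above: only the
 * head page of a compound page stores a meaningful index, so a tail
 * page's file offset must be derived from its head.  All names here
 * are illustrative; this is not kernel code.
 */
struct model_page {
	struct model_page *head;	/* compound head; points to self for small pages */
	unsigned long index;		/* meaningful only on the head page */
};

/* Analogous in spirit to page_to_pgoff(): head index plus offset within the compound. */
static unsigned long model_page_to_pgoff(const struct model_page *page)
{
	return page->head->index + (unsigned long)(page - page->head);
}
```

[With a 4-page compound whose head has index 100, the third tail page models offset 103.]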
On 09.11.23 08:09, Kefeng Wang wrote:
> On 2023/11/8 21:59, Matthew Wilcox wrote:
>> On Wed, Nov 08, 2023 at 09:40:09AM +0800, Kefeng Wang wrote:
>>> On 2023/11/7 22:24, Matthew Wilcox wrote:
>>>> On Tue, Nov 07, 2023 at 09:52:11PM +0800, Kefeng Wang wrote:
>>>>> struct page *ksm_might_need_to_copy(struct page *page,
>>>>> [...]
>>>>
>>>> Hmm. page->index is going away. What should we do here instead?
>>>
>>> Do you mean to replace page->index to folio->index, or kill index from
>>> struct page?
>>
>> I'm asking you what we should do.
>>
>> Tail pages already don't have a valid ->index (or ->mapping).
>> So presumably we can't see a tail page here today. But will we in future?
>
> I think we could replace page->index to page_to_pgoff(page).

What the second part of that code does is check whether a page might
have been a KSM page before swapout.

Once a KSM page is swapped out, we lose the KSM marker. To recover, we
have to check whether the new page logically "fits" into the VMA.

Large folios are never KSM folios, and we only swap in small folios (and
in the future, once we would swap in large folios, they couldn't have
been KSM folios before).

So you could return early in the function if we have a large folio and
make all operations based on the (small) folio.
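[Editor's note: the "fits into the VMA" check David describes is the `page->index == linear_page_index(vma, addr)` comparison in the quoted diff. A hedged userspace sketch of that arithmetic follows; it is simplified (fixed 4 KiB pages, hugetlb ignored) and all type and function names are made up for illustration:]

```c
#include <assert.h>

#define MODEL_PAGE_SHIFT 12	/* assume 4 KiB pages for this sketch */

/* Simplified stand-in for the vm_area_struct fields used here. */
struct model_vma {
	unsigned long vm_start;	/* first virtual address of the mapping */
	unsigned long vm_pgoff;	/* page offset of vm_start within the anon/file range */
};

/*
 * Mirrors the spirit of linear_page_index(): the page offset a VMA
 * expects to find mapped at a given virtual address.
 */
static unsigned long model_linear_page_index(const struct model_vma *vma,
					     unsigned long addr)
{
	return ((addr - vma->vm_start) >> MODEL_PAGE_SHIFT) + vma->vm_pgoff;
}

/* The check discussed above, modeled: does the page's index fit this address? */
static int model_page_fits_vma(unsigned long page_index,
			       const struct model_vma *vma, unsigned long addr)
{
	return page_index == model_linear_page_index(vma, addr);
}
```

[A page whose index disagrees with this expected offset cannot simply be reused, which is exactly the "might have been KSM before swapout" case the thread is about.]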
On 2023/11/13 16:32, David Hildenbrand wrote:
> On 09.11.23 08:09, Kefeng Wang wrote:
>> On 2023/11/8 21:59, Matthew Wilcox wrote:
>>> On Wed, Nov 08, 2023 at 09:40:09AM +0800, Kefeng Wang wrote:
>>>> On 2023/11/7 22:24, Matthew Wilcox wrote:
>>>>> On Tue, Nov 07, 2023 at 09:52:11PM +0800, Kefeng Wang wrote:
>>>>>> struct page *ksm_might_need_to_copy(struct page *page,
>>>>>> [...]
>>>>>
>>>>> Hmm. page->index is going away. What should we do here instead?
>>>>
>>>> Do you mean to replace page->index to folio->index, or kill index from
>>>> struct page?
>>>
>>> I'm asking you what we should do.
>>>
>>> Tail pages already don't have a valid ->index (or ->mapping).
>>> So presumably we can't see a tail page here today. But will we in
>>> future?
>>
>> I think we could replace page->index to page_to_pgoff(page).
>
> What the second part of that code does is check whether a page might
> have been a KSM page before swapout.
>
> Once a KSM page is swapped out, we lose the KSM marker. To recover, we
> have to check whether the new page logically "fits" into the VMA.
>
> Large folios are never KSM folios, and we only swap in small folios (and
> in the future, once we would swap in large folios, they couldn't have
> been KSM folios before).
>
> So you could return early in the function if we have a large folio and
> make all operations based on the (small) folio.

Sure, I will add folio_test_large check ahead and convert page->index to
folio->index, and adjust the logical if ksm and swapin support large
folio, thanks.
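[Editor's note: pulling the thread's conclusion together, the reworked decision flow might be sketched as follows. This is a userspace model, not the kernel function: the stable-node/unmerge test is collapsed into a single flag, and all names are illustrative:]

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Userspace model of the control flow agreed on above: large folios can
 * never have been KSM folios, so bail out early, then fall through to
 * the existing "might this have been KSM before swapout?" checks.
 * Field and function names are illustrative, not the kernel's.
 */
struct model_folio {
	bool large;		/* like folio_test_large(): never a KSM folio */
	bool ksm_stable;	/* still a tracked, stable KSM folio */
	bool has_anon_vma;
	unsigned long index;
};

enum copy_decision { USE_AS_IS, MUST_COPY };

static enum copy_decision
model_might_need_copy(const struct model_folio *folio, unsigned long expected_index)
{
	if (folio->large)
		return USE_AS_IS;	/* early return: cannot be KSM */
	if (folio->ksm_stable)
		return USE_AS_IS;	/* known KSM folio, no copy needed */
	if (!folio->has_anon_vma)
		return USE_AS_IS;	/* no anon mapping to conflict with */
	if (folio->index == expected_index)
		return USE_AS_IS;	/* fits the VMA: not a stray KSM copy */
	return MUST_COPY;		/* may have been KSM before swapout */
}
```

[Only the last branch forces a copy: a small anon folio whose index does not match what the VMA expects at the faulting address.]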