From patchwork Wed Aug 9 06:11:03 2023
X-Patchwork-Submitter: Yin Fengwei
X-Patchwork-Id: 133091
From: Yin Fengwei
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, akpm@linux-foundation.org,
    yuzhao@google.com, willy@infradead.org, hughd@google.com, yosryahmed@google.com,
    ryan.roberts@arm.com, david@redhat.com, shy828301@gmail.com
Cc: fengwei.yin@intel.com
Subject: [PATCH v2 1/3] mm: add functions folio_in_range() and folio_within_vma()
Date: Wed, 9 Aug 2023 14:11:03 +0800
Message-Id: <20230809061105.3369958-2-fengwei.yin@intel.com>
In-Reply-To: <20230809061105.3369958-1-fengwei.yin@intel.com>
References: <20230809061105.3369958-1-fengwei.yin@intel.com>

folio_in_range() will be used to check whether a folio is mapped to a
specific VMA and whether its mapping address falls within a given range.
Also add a helper, folio_within_vma(), built on folio_in_range(), to check
whether a folio is mapped entirely within a VMA.

Signed-off-by: Yin Fengwei
---
 mm/internal.h | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/mm/internal.h b/mm/internal.h
index 154da4f0d557..5d1b71010fd2 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -585,6 +585,41 @@ extern long faultin_vma_page_range(struct vm_area_struct *vma,
                                    bool write, int *locked);
 extern bool mlock_future_ok(struct mm_struct *mm, unsigned long flags,
                             unsigned long bytes);
+
+static inline bool
+folio_in_range(struct folio *folio, struct vm_area_struct *vma,
+               unsigned long start, unsigned long end)
+{
+       pgoff_t pgoff, addr;
+       unsigned long vma_pglen = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
+
+       VM_WARN_ON_FOLIO(folio_test_ksm(folio), folio);
+       if (start > end)
+               return false;
+
+       if (start < vma->vm_start)
+               start = vma->vm_start;
+
+       if (end > vma->vm_end)
+               end = vma->vm_end;
+
+       pgoff = folio_pgoff(folio);
+
+       /* if folio start address is not in vma range */
+       if (!in_range(pgoff, vma->vm_pgoff, vma_pglen))
+               return false;
+
+       addr = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
+
+       return !(addr < start || end - addr < folio_size(folio));
+}
+
+static inline bool
+folio_within_vma(struct folio *folio, struct vm_area_struct *vma)
+{
+       return folio_in_range(folio, vma, vma->vm_start, vma->vm_end);
+}
+
 /*
  * mlock_vma_folio() and munlock_vma_folio():
  *  should be called with vma's mmap_lock held for read or write,
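To see the arithmetic of folio_in_range() in isolation, here is a minimal
userspace sketch of the same check. The toy_* structures and names are
illustrative stand-ins, not kernel API, and a 4K page size is assumed:

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* Hypothetical stand-ins for the kernel structures. */
struct toy_vma   { unsigned long vm_start, vm_end, vm_pgoff; };
struct toy_folio { unsigned long pgoff, nr_pages; };

/*
 * Mirrors the folio_in_range() logic: clamp [start, end) to the VMA,
 * map the folio's pgoff back to a virtual address, then require the
 * whole folio to fit inside the clamped range.
 */
static bool toy_folio_in_range(const struct toy_folio *folio,
                               const struct toy_vma *vma,
                               unsigned long start, unsigned long end)
{
        unsigned long vma_pglen = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
        unsigned long addr;

        if (start > end)
                return false;
        if (start < vma->vm_start)
                start = vma->vm_start;
        if (end > vma->vm_end)
                end = vma->vm_end;

        /* the folio's first page must be mapped by this VMA at all */
        if (folio->pgoff < vma->vm_pgoff ||
            folio->pgoff - vma->vm_pgoff >= vma_pglen)
                return false;

        addr = vma->vm_start + ((folio->pgoff - vma->vm_pgoff) << PAGE_SHIFT);
        return !(addr < start || end - addr < folio->nr_pages * PAGE_SIZE);
}

int main(void)
{
        struct toy_vma vma = { 0x10000, 0x30000, 0 };   /* 32-page VMA */
        struct toy_folio folio = { 4, 4 };              /* 16K folio at pgoff 4 */

        /* fully inside the VMA: prints 1 */
        printf("%d\n", toy_folio_in_range(&folio, &vma, vma.vm_start, vma.vm_end));
        /* range ends before the folio does: prints 0 */
        printf("%d\n", toy_folio_in_range(&folio, &vma, 0x10000, 0x15000));
        return 0;
}

The final comparison is the important part: the folio only counts as in range
when its mapping address is at or after the clamped start and the whole
folio_size() fits before the clamped end.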
From patchwork Wed Aug 9 06:11:04 2023
X-Patchwork-Submitter: Yin Fengwei
X-Patchwork-Id: 133076
From: Yin Fengwei
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, akpm@linux-foundation.org,
    yuzhao@google.com, willy@infradead.org, hughd@google.com, yosryahmed@google.com,
    ryan.roberts@arm.com, david@redhat.com, shy828301@gmail.com
Cc: fengwei.yin@intel.com
Subject: [PATCH v2 2/3] mm: handle large folio when large folio in VM_LOCKED VMA range
Date: Wed, 9 Aug 2023 14:11:04 +0800
Message-Id: <20230809061105.3369958-3-fengwei.yin@intel.com>
In-Reply-To: <20230809061105.3369958-1-fengwei.yin@intel.com>
References: <20230809061105.3369958-1-fengwei.yin@intel.com>

If a large folio is within the range of a VM_LOCKED VMA, it should be mlocked
to avoid being picked up by page reclaim, which may otherwise split the large
folio and then mlock each page again. Mlocking this kind of large folio keeps
it away from page reclaim.

For a large folio which crosses the boundary of a VM_LOCKED VMA, or which is
not fully mapped to a VM_LOCKED VMA, it is better not to mlock it, so that
under memory pressure the folio can be split and the pages outside the
VM_LOCKED VMA can be reclaimed.

Ideally, a large folio should be mlocked when it becomes fully mapped to a VMA
and munlocked when any of its pages is unmapped from the VMA. But in some
cases (like add/remove rmap) it is not easy to detect whether the large folio
is fully mapped. So update mlock_vma_folio() and munlock_vma_folio() to
mlock/munlock the folio purely according to vma->vm_flags, and let the caller
decide whether to call them.

For add rmap, only mlock normal 4K folios and postpone large folio handling to
the page reclaim phase, where the page table iterator can be reused to detect
whether the folio is fully mapped. For remove rmap, invoke munlock_vma_folio()
unconditionally, because removing an rmap means the folio is no longer fully
mapped to the VMA.

Signed-off-by: Yin Fengwei
---
 mm/internal.h | 23 ++++++++++--------
 mm/rmap.c     | 66 ++++++++++++++++++++++++++++++++++++++++++---------
 2 files changed, 68 insertions(+), 21 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 5d1b71010fd2..b14fb2d8b04c 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -628,14 +628,10 @@ folio_within_vma(struct folio *folio, struct vm_area_struct *vma)
  * mlock is usually called at the end of page_add_*_rmap(), munlock at
  * the end of page_remove_rmap(); but new anon folios are managed by
  * folio_add_lru_vma() calling mlock_new_folio().
- *
- * @compound is used to include pmd mappings of THPs, but filter out
- * pte mappings of THPs, which cannot be consistently counted: a pte
- * mapping of the THP head cannot be distinguished by the page alone.
  */
 void mlock_folio(struct folio *folio);
 static inline void mlock_vma_folio(struct folio *folio,
-                       struct vm_area_struct *vma, bool compound)
+                       struct vm_area_struct *vma)
 {
        /*
         * The VM_SPECIAL check here serves two purposes.
@@ -645,17 +641,24 @@ static inline void mlock_vma_folio(struct folio *folio,
         * file->f_op->mmap() is using vm_insert_page(s), when VM_LOCKED may
         * still be set while VM_SPECIAL bits are added: so ignore it then.
         */
-       if (unlikely((vma->vm_flags & (VM_LOCKED|VM_SPECIAL)) == VM_LOCKED) &&
-           (compound || !folio_test_large(folio)))
+       if (unlikely((vma->vm_flags & (VM_LOCKED|VM_SPECIAL)) == VM_LOCKED))
                mlock_folio(folio);
 }
 
 void munlock_folio(struct folio *folio);
 static inline void munlock_vma_folio(struct folio *folio,
-                       struct vm_area_struct *vma, bool compound)
+                       struct vm_area_struct *vma)
 {
-       if (unlikely(vma->vm_flags & VM_LOCKED) &&
-           (compound || !folio_test_large(folio)))
+       /*
+        * munlock if the function is called. Ideally, we should only
+        * do munlock if any page of folio is unmapped from VMA and
+        * cause folio not fully mapped to VMA.
+        *
+        * But it's not easy to confirm that's the situation. So we
+        * always munlock the folio and page reclaim will correct it
+        * if it's wrong.
+        */
+       if (unlikely(vma->vm_flags & VM_LOCKED))
                munlock_folio(folio);
 }
 
diff --git a/mm/rmap.c b/mm/rmap.c
index 3c20d0d79905..dae0443e9ab0 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -798,6 +798,7 @@ struct folio_referenced_arg {
        unsigned long vm_flags;
        struct mem_cgroup *memcg;
 };
+
 /*
  * arg: folio_referenced_arg will be passed
  */
@@ -807,17 +808,33 @@ static bool folio_referenced_one(struct folio *folio,
        struct folio_referenced_arg *pra = arg;
        DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, 0);
        int referenced = 0;
+       unsigned long start = address, ptes = 0;
 
        while (page_vma_mapped_walk(&pvmw)) {
                address = pvmw.address;
-               if ((vma->vm_flags & VM_LOCKED) &&
-                   (!folio_test_large(folio) || !pvmw.pte)) {
-                       /* Restore the mlock which got missed */
-                       mlock_vma_folio(folio, vma, !pvmw.pte);
-                       page_vma_mapped_walk_done(&pvmw);
-                       pra->vm_flags |= VM_LOCKED;
-                       return false; /* To break the loop */
+               if (vma->vm_flags & VM_LOCKED) {
+                       if (!folio_test_large(folio) || !pvmw.pte) {
+                               /* Restore the mlock which got missed */
+                               mlock_vma_folio(folio, vma);
+                               page_vma_mapped_walk_done(&pvmw);
+                               pra->vm_flags |= VM_LOCKED;
+                               return false; /* To break the loop */
+                       }
+                       /*
+                        * For large folio fully mapped to VMA, will
+                        * be handled after the pvmw loop.
+                        *
+                        * For large folio cross VMA boundaries, it's
+                        * expected to be picked by page reclaim. But
+                        * should skip reference of pages which are in
+                        * the range of VM_LOCKED vma. As page reclaim
+                        * should just count the reference of pages out
+                        * the range of VM_LOCKED vma.
+                        */
+                       ptes++;
+                       pra->mapcount--;
+                       continue;
                }
 
                if (pvmw.pte) {
@@ -842,6 +859,23 @@ static bool folio_referenced_one(struct folio *folio,
                pra->mapcount--;
        }
 
+       if ((vma->vm_flags & VM_LOCKED) &&
+                       folio_test_large(folio) &&
+                       folio_within_vma(folio, vma)) {
+               unsigned long s_align, e_align;
+
+               s_align = ALIGN_DOWN(start, PMD_SIZE);
+               e_align = ALIGN_DOWN(start + folio_size(folio) - 1, PMD_SIZE);
+
+               /* folio doesn't cross page table boundary and fully mapped */
+               if ((s_align == e_align) && (ptes == folio_nr_pages(folio))) {
+                       /* Restore the mlock which got missed */
+                       mlock_vma_folio(folio, vma);
+                       pra->vm_flags |= VM_LOCKED;
+                       return false; /* To break the loop */
+               }
+       }
+
        if (referenced)
                folio_clear_idle(folio);
        if (folio_test_clear_young(folio))
@@ -1260,7 +1294,14 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
                __page_check_anon_rmap(folio, page, vma, address);
        }
 
-       mlock_vma_folio(folio, vma, compound);
+       /*
+        * For large folio, only mlock it if it's fully mapped to VMA. It's
+        * not easy to check whether the large folio is fully mapped to VMA
+        * here. Only mlock normal 4K folio and leave page reclaim to handle
+        * large folio.
+        */
+       if (!folio_test_large(folio))
+               mlock_vma_folio(folio, vma);
 }
 
 void folio_add_new_anon_rmap_range(struct folio *folio,
@@ -1371,7 +1412,9 @@ void folio_add_file_rmap_range(struct folio *folio, struct page *page,
        if (nr)
                __lruvec_stat_mod_folio(folio, NR_FILE_MAPPED, nr);
 
-       mlock_vma_folio(folio, vma, compound);
+       /* See comments in page_add_anon_rmap() */
+       if (!folio_test_large(folio))
+               mlock_vma_folio(folio, vma);
 }
 
 /**
@@ -1482,7 +1525,7 @@ void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
         * it's only reliable while mapped.
         */
-       munlock_vma_folio(folio, vma, compound);
+       munlock_vma_folio(folio, vma);
 }
 
 /*
@@ -1543,7 +1586,8 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
                if (!(flags & TTU_IGNORE_MLOCK) &&
                    (vma->vm_flags & VM_LOCKED)) {
                        /* Restore the mlock which got missed */
-                       mlock_vma_folio(folio, vma, false);
+                       if (!folio_test_large(folio))
+                               mlock_vma_folio(folio, vma);
                        page_vma_mapped_walk_done(&pvmw);
                        ret = false;
                        break;
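The post-walk check added to folio_referenced_one() above treats a large folio
as safely mlockable when its first and last bytes fall under the same
PMD-aligned boundary and the walk saw one PTE per page of the folio. A minimal
userspace model of that test follows; the toy_* names and the 2MB PMD_SIZE are
assumptions for illustration, not kernel API:

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE       (1UL << 12)
#define PMD_SIZE        (1UL << 21)     /* one page table covers 2MB here */
#define ALIGN_DOWN(x, a)        ((x) & ~((a) - 1))

/*
 * A large folio is "fully mapped and within one page table" when the
 * PMD-aligned addresses of its first and last byte are equal and the
 * rmap walk counted exactly one PTE per page.
 */
static bool toy_fully_mapped_one_pt(unsigned long start, unsigned long nr_pages,
                                    unsigned long ptes_seen)
{
        unsigned long size = nr_pages * PAGE_SIZE;
        unsigned long s_align = ALIGN_DOWN(start, PMD_SIZE);
        unsigned long e_align = ALIGN_DOWN(start + size - 1, PMD_SIZE);

        return s_align == e_align && ptes_seen == nr_pages;
}

int main(void)
{
        /* 64K folio inside one 2MB page table: prints 1 */
        printf("%d\n", toy_fully_mapped_one_pt(0x200000, 16, 16));
        /* same folio straddling a 2MB boundary: prints 0 */
        printf("%d\n", toy_fully_mapped_one_pt(0x3f8000, 16, 16));
        /* only half of its pages were found mapped: prints 0 */
        printf("%d\n", toy_fully_mapped_one_pt(0x200000, 16, 8));
        return 0;
}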
From patchwork Wed Aug 9 06:11:05 2023
X-Patchwork-Submitter: Yin Fengwei
X-Patchwork-Id: 133089
From: Yin Fengwei
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, akpm@linux-foundation.org,
    yuzhao@google.com, willy@infradead.org, hughd@google.com, yosryahmed@google.com,
    ryan.roberts@arm.com, david@redhat.com, shy828301@gmail.com
Cc: fengwei.yin@intel.com
Subject: [PATCH v2 3/3] mm: mlock: update mlock_pte_range to handle large folio
Date: Wed, 9 Aug 2023 14:11:05 +0800
Message-Id: <20230809061105.3369958-4-fengwei.yin@intel.com>
In-Reply-To: <20230809061105.3369958-1-fengwei.yin@intel.com>
References: <20230809061105.3369958-1-fengwei.yin@intel.com>

Currently, the kernel only mlocks base-size (4K) folios during the mlock
syscall.
Add large folio support with the following rules:

  - Only mlock a large folio when it is within the VM_LOCKED VMA range and
    fully mapped to the page table. A fully mapped folio is required because,
    if the folio is not fully mapped to a VM_LOCKED VMA and the system is
    under memory pressure, page reclaim is allowed to pick up this folio,
    split it and reclaim the pages which are not in the VM_LOCKED VMA.

  - munlock applies to a large folio which is within the VMA range or which
    crosses the VMA boundary. This is required to handle the case where the
    large folio is mlocked and the VMA is later split in the middle of the
    large folio.

Signed-off-by: Yin Fengwei
---
 mm/mlock.c | 66 ++++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 64 insertions(+), 2 deletions(-)

diff --git a/mm/mlock.c b/mm/mlock.c
index 06bdfab83b58..1da1996745e7 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -305,6 +305,58 @@ void munlock_folio(struct folio *folio)
        local_unlock(&mlock_fbatch.lock);
 }
 
+static inline unsigned int folio_mlock_step(struct folio *folio,
+               pte_t *pte, unsigned long addr, unsigned long end)
+{
+       unsigned int count, i, nr = folio_nr_pages(folio);
+       unsigned long pfn = folio_pfn(folio);
+       pte_t ptent = ptep_get(pte);
+
+       if (!folio_test_large(folio))
+               return 1;
+
+       count = pfn + nr - pte_pfn(ptent);
+       count = min_t(unsigned int, count, (end - addr) >> PAGE_SHIFT);
+
+       for (i = 0; i < count; i++, pte++) {
+               pte_t entry = ptep_get(pte);
+
+               if (!pte_present(entry))
+                       break;
+               if (pte_pfn(entry) - pfn >= nr)
+                       break;
+       }
+
+       return i;
+}
+
+static inline bool allow_mlock_munlock(struct folio *folio,
+               struct vm_area_struct *vma, unsigned long start,
+               unsigned long end, unsigned int step)
+{
+       /*
+        * For unlock, allow munlock large folio which is partially
+        * mapped to VMA. As it's possible that large folio is
+        * mlocked and VMA is split later.
+        *
+        * During memory pressure, such kind of large folio can
+        * be split. And the pages are not in VM_LOCKed VMA
+        * can be reclaimed.
+        */
+       if (!(vma->vm_flags & VM_LOCKED))
+               return true;
+
+       /* folio not in range [start, end), skip mlock */
+       if (!folio_in_range(folio, vma, start, end))
+               return false;
+
+       /* folio is not fully mapped, skip mlock */
+       if (step != folio_nr_pages(folio))
+               return false;
+
+       return true;
+}
+
 static int mlock_pte_range(pmd_t *pmd, unsigned long addr,
                           unsigned long end, struct mm_walk *walk)
@@ -314,6 +366,8 @@ static int mlock_pte_range(pmd_t *pmd, unsigned long addr,
        pte_t *start_pte, *pte;
        pte_t ptent;
        struct folio *folio;
+       unsigned int step = 1;
+       unsigned long start = addr;
 
        ptl = pmd_trans_huge_lock(pmd, vma);
        if (ptl) {
@@ -334,6 +388,7 @@ static int mlock_pte_range(pmd_t *pmd, unsigned long addr,
                walk->action = ACTION_AGAIN;
                return 0;
        }
+
        for (pte = start_pte; addr != end; pte++, addr += PAGE_SIZE) {
                ptent = ptep_get(pte);
                if (!pte_present(ptent))
@@ -341,12 +396,19 @@ static int mlock_pte_range(pmd_t *pmd, unsigned long addr,
                folio = vm_normal_folio(vma, addr, ptent);
                if (!folio || folio_is_zone_device(folio))
                        continue;
-               if (folio_test_large(folio))
-                       continue;
+
+               step = folio_mlock_step(folio, pte, addr, end);
+               if (!allow_mlock_munlock(folio, vma, start, end, step))
+                       goto next_entry;
+
                if (vma->vm_flags & VM_LOCKED)
                        mlock_folio(folio);
                else
                        munlock_folio(folio);
+
+next_entry:
+               pte += step - 1;
+               addr += (step - 1) << PAGE_SHIFT;
        }
        pte_unmap(start_pte);
 out:
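For reference, the stepping logic of folio_mlock_step() can be modelled in
userspace roughly as below. The pfns[] array stands in for the PTE entries of
the current page table (0 marking a non-present entry); all toy_* names are
illustrative assumptions, not kernel API:

#include <stdio.h>

#define PAGE_SHIFT 12

/*
 * Starting from the current entry, count how many consecutive present
 * entries still map pages of this folio, capped by the pages remaining
 * in the folio and by the end of the range being walked. This is the
 * "step" mlock_pte_range() uses to skip over a large folio in one go.
 */
static unsigned int toy_mlock_step(unsigned long folio_pfn, unsigned int nr_pages,
                                   const unsigned long *pfns, unsigned long addr,
                                   unsigned long end)
{
        unsigned int count, i;

        if (nr_pages == 1)
                return 1;

        count = folio_pfn + nr_pages - pfns[0];
        if (count > (end - addr) >> PAGE_SHIFT)
                count = (end - addr) >> PAGE_SHIFT;

        for (i = 0; i < count; i++) {
                if (!pfns[i])                           /* non-present entry */
                        break;
                if (pfns[i] - folio_pfn >= nr_pages)    /* entry left the folio */
                        break;
        }
        return i;
}

int main(void)
{
        /* a 4-page folio at pfn 100, walk positioned at its first page */
        unsigned long full[] = { 100, 101, 102, 103 };
        unsigned long hole[] = { 100, 101, 0, 103 };

        printf("%u\n", toy_mlock_step(100, 4, full, 0x1000, 0x9000)); /* 4: fully mapped */
        printf("%u\n", toy_mlock_step(100, 4, hole, 0x1000, 0x9000)); /* 2: stops at the hole */
        printf("%u\n", toy_mlock_step(100, 4, full, 0x1000, 0x3000)); /* 2: range ends first */
        return 0;
}

A step equal to folio_nr_pages() is what allow_mlock_munlock() treats as
"fully mapped", so only then is the folio mlocked; a smaller step makes the
walk skip past the folio and leave it to page reclaim.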