From patchwork Thu Jan 12 12:39:28 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 42404
From: Lorenzo Stoakes
To: linux-mm@kvack.org, Andrew Morton, linux-kernel@vger.kernel.org
Cc: Matthew Wilcox, Hugh Dickins, Vlastimil Babka, Liam Howlett,
    William Kucharski, Christian Brauner, Jonathan Corbet, Mike Rapoport,
    Joel Fernandes, Geert Uytterhoeven, Lorenzo Stoakes
Subject: [PATCH v4 1/5] mm: pagevec: add folio_batch_reinit()
Date: Thu, 12 Jan 2023 12:39:28 +0000
Message-Id: <9018cecacb39e34c883540f997f9be8281153613.1673526881.git.lstoakes@gmail.com>

This performs the same task as pagevec_reinit(), only modifying a folio
batch rather than a pagevec.

Signed-off-by: Lorenzo Stoakes
Acked-by: Vlastimil Babka
---
 include/linux/pagevec.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/include/linux/pagevec.h b/include/linux/pagevec.h
index 215eb6c3bdc9..2a6f61a0c10a 100644
--- a/include/linux/pagevec.h
+++ b/include/linux/pagevec.h
@@ -103,6 +103,11 @@ static inline void folio_batch_init(struct folio_batch *fbatch)
 	fbatch->percpu_pvec_drained = false;
 }
 
+static inline void folio_batch_reinit(struct folio_batch *fbatch)
+{
+	fbatch->nr = 0;
+}
+
 static inline unsigned int folio_batch_count(struct folio_batch *fbatch)
 {
 	return fbatch->nr;
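
As context for how the new helper differs from folio_batch_init(), here is a
minimal usage sketch (not part of the patch; next_folio() and process_batch()
are hypothetical stand-ins for a folio source and consumer):

static void process_all_folios(void)
{
	struct folio_batch fbatch;
	struct folio *folio;

	/* Full init: sets nr = 0 and clears percpu_pvec_drained. */
	folio_batch_init(&fbatch);

	while ((folio = next_folio()) != NULL) {
		/* folio_batch_add() returns the space remaining. */
		if (!folio_batch_add(&fbatch, folio)) {
			process_batch(&fbatch);
			/*
			 * Between rounds only nr is reset;
			 * percpu_pvec_drained is deliberately left
			 * untouched, which is what distinguishes
			 * folio_batch_reinit() from folio_batch_init().
			 */
			folio_batch_reinit(&fbatch);
		}
	}
	if (folio_batch_count(&fbatch))
		process_batch(&fbatch);
}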

From patchwork Thu Jan 12 12:39:29 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 42406
From: Lorenzo Stoakes
To: linux-mm@kvack.org, Andrew Morton, linux-kernel@vger.kernel.org
Cc: Matthew Wilcox, Hugh Dickins, Vlastimil Babka, Liam Howlett,
    William Kucharski, Christian Brauner, Jonathan Corbet, Mike Rapoport,
    Joel Fernandes, Geert Uytterhoeven, Lorenzo Stoakes
Subject: [PATCH v4 2/5] mm: mlock: use folios and a folio batch internally
Date: Thu, 12 Jan 2023 12:39:29 +0000
Message-Id: <9f894d54d568773f4ed3cb0eef5f8932f62c95f4.1673526881.git.lstoakes@gmail.com>

This brings mlock in line with the folio batches declared in mm/swap.c
and makes the code more consistent across the two.

The existing mechanism for identifying which operation each folio in
the batch is undergoing is maintained, i.e. using the lower 2 bits of
the struct folio address (previously struct page address). This should
continue to function correctly as folios remain at least system
word-aligned.

All invocations of mlock() pass either a non-compound page or the head
of a THP-compound page, and no tail pages need updating, so this
functionality works with struct folios being used internally rather
than struct pages.

In this patch the external interface is kept identical to before in
order to maintain separation between patches in the series, using a
rather awkward conversion from struct page to struct folio in relevant
functions.

However, this maintenance of the existing interface is intended to be
temporary - the next patch in the series will update the interfaces to
accept folios directly.

Signed-off-by: Lorenzo Stoakes
Acked-by: Vlastimil Babka
---
 mm/mlock.c | 246 +++++++++++++++++++++++++++--------------------------
 1 file changed, 124 insertions(+), 122 deletions(-)

diff --git a/mm/mlock.c b/mm/mlock.c
index 7032f6dd0ce1..f8e8d30ab08a 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -28,12 +28,12 @@
 
 #include "internal.h"
 
-struct mlock_pvec {
+struct mlock_fbatch {
 	local_lock_t lock;
-	struct pagevec vec;
+	struct folio_batch fbatch;
 };
 
-static DEFINE_PER_CPU(struct mlock_pvec, mlock_pvec) = {
+static DEFINE_PER_CPU(struct mlock_fbatch, mlock_fbatch) = {
 	.lock = INIT_LOCAL_LOCK(lock),
 };
 
@@ -48,192 +48,192 @@ bool can_do_mlock(void)
 EXPORT_SYMBOL(can_do_mlock);
 
 /*
- * Mlocked pages are marked with PageMlocked() flag for efficient testing
+ * Mlocked folios are marked with the PG_mlocked flag for efficient testing
  * in vmscan and, possibly, the fault path; and to support semi-accurate
  * statistics.
  *
- * An mlocked page [PageMlocked(page)] is unevictable.  As such, it will
- * be placed on the LRU "unevictable" list, rather than the [in]active lists.
- * The unevictable list is an LRU sibling list to the [in]active lists.
- * PageUnevictable is set to indicate the unevictable state.
+ * An mlocked folio [folio_test_mlocked(folio)] is unevictable.  As such, it
+ * will be ostensibly placed on the LRU "unevictable" list (actually no such
+ * list exists), rather than the [in]active lists. PG_unevictable is set to
+ * indicate the unevictable state.
  */
-static struct lruvec *__mlock_page(struct page *page, struct lruvec *lruvec)
+static struct lruvec *__mlock_folio(struct folio *folio, struct lruvec *lruvec)
 {
 	/* There is nothing more we can do while it's off LRU */
-	if (!TestClearPageLRU(page))
+	if (!folio_test_clear_lru(folio))
 		return lruvec;
 
-	lruvec = folio_lruvec_relock_irq(page_folio(page), lruvec);
+	lruvec = folio_lruvec_relock_irq(folio, lruvec);
 
-	if (unlikely(page_evictable(page))) {
+	if (unlikely(folio_evictable(folio))) {
 		/*
-		 * This is a little surprising, but quite possible:
-		 * PageMlocked must have got cleared already by another CPU.
-		 * Could this page be on the Unevictable LRU?  I'm not sure,
-		 * but move it now if so.
+		 * This is a little surprising, but quite possible: PG_mlocked
+		 * must have got cleared already by another CPU.  Could this
+		 * folio be unevictable?  I'm not sure, but move it now if so.
		 */
-		if (PageUnevictable(page)) {
-			del_page_from_lru_list(page, lruvec);
-			ClearPageUnevictable(page);
-			add_page_to_lru_list(page, lruvec);
+		if (folio_test_unevictable(folio)) {
+			lruvec_del_folio(lruvec, folio);
+			folio_clear_unevictable(folio);
+			lruvec_add_folio(lruvec, folio);
+
 			__count_vm_events(UNEVICTABLE_PGRESCUED,
-					  thp_nr_pages(page));
+					  folio_nr_pages(folio));
 		}
 		goto out;
 	}
 
-	if (PageUnevictable(page)) {
-		if (PageMlocked(page))
-			page->mlock_count++;
+	if (folio_test_unevictable(folio)) {
+		if (folio_test_mlocked(folio))
+			folio->mlock_count++;
 		goto out;
 	}
 
-	del_page_from_lru_list(page, lruvec);
-	ClearPageActive(page);
-	SetPageUnevictable(page);
-	page->mlock_count = !!PageMlocked(page);
-	add_page_to_lru_list(page, lruvec);
-	__count_vm_events(UNEVICTABLE_PGCULLED, thp_nr_pages(page));
+	lruvec_del_folio(lruvec, folio);
+	folio_clear_active(folio);
+	folio_set_unevictable(folio);
+	folio->mlock_count = !!folio_test_mlocked(folio);
+	lruvec_add_folio(lruvec, folio);
+	__count_vm_events(UNEVICTABLE_PGCULLED, folio_nr_pages(folio));
 out:
-	SetPageLRU(page);
+	folio_set_lru(folio);
 	return lruvec;
 }
 
-static struct lruvec *__mlock_new_page(struct page *page, struct lruvec *lruvec)
+static struct lruvec *__mlock_new_folio(struct folio *folio, struct lruvec *lruvec)
 {
-	VM_BUG_ON_PAGE(PageLRU(page), page);
+	VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
 
-	lruvec = folio_lruvec_relock_irq(page_folio(page), lruvec);
+	lruvec = folio_lruvec_relock_irq(folio, lruvec);
 
 	/* As above, this is a little surprising, but possible */
-	if (unlikely(page_evictable(page)))
+	if (unlikely(folio_evictable(folio)))
 		goto out;
 
-	SetPageUnevictable(page);
-	page->mlock_count = !!PageMlocked(page);
-	__count_vm_events(UNEVICTABLE_PGCULLED, thp_nr_pages(page));
+	folio_set_unevictable(folio);
+	folio->mlock_count = !!folio_test_mlocked(folio);
+	__count_vm_events(UNEVICTABLE_PGCULLED, folio_nr_pages(folio));
 out:
-	add_page_to_lru_list(page, lruvec);
-	SetPageLRU(page);
+	lruvec_add_folio(lruvec, folio);
+	folio_set_lru(folio);
 	return lruvec;
 }
 
-static struct lruvec *__munlock_page(struct page *page, struct lruvec *lruvec)
+static struct lruvec *__munlock_folio(struct folio *folio, struct lruvec *lruvec)
 {
-	int nr_pages = thp_nr_pages(page);
+	int nr_pages = folio_nr_pages(folio);
 	bool isolated = false;
 
-	if (!TestClearPageLRU(page))
+	if (!folio_test_clear_lru(folio))
 		goto munlock;
 
 	isolated = true;
-	lruvec = folio_lruvec_relock_irq(page_folio(page), lruvec);
+	lruvec = folio_lruvec_relock_irq(folio, lruvec);
 
-	if (PageUnevictable(page)) {
+	if (folio_test_unevictable(folio)) {
 		/* Then mlock_count is maintained, but might undercount */
-		if (page->mlock_count)
-			page->mlock_count--;
-		if (page->mlock_count)
+		if (folio->mlock_count)
+			folio->mlock_count--;
+		if (folio->mlock_count)
 			goto out;
 	}
 	/* else assume that was the last mlock: reclaim will fix it if not */
 
 munlock:
-	if (TestClearPageMlocked(page)) {
-		__mod_zone_page_state(page_zone(page), NR_MLOCK, -nr_pages);
-		if (isolated || !PageUnevictable(page))
+	if (folio_test_clear_mlocked(folio)) {
+		__zone_stat_mod_folio(folio, NR_MLOCK, -nr_pages);
+		if (isolated || !folio_test_unevictable(folio))
 			__count_vm_events(UNEVICTABLE_PGMUNLOCKED, nr_pages);
 		else
 			__count_vm_events(UNEVICTABLE_PGSTRANDED, nr_pages);
 	}
 
-	/* page_evictable() has to be checked *after* clearing Mlocked */
-	if (isolated && PageUnevictable(page) && page_evictable(page)) {
-		del_page_from_lru_list(page, lruvec);
-		ClearPageUnevictable(page);
-		add_page_to_lru_list(page, lruvec);
+	/* folio_evictable() has to be checked *after* clearing Mlocked */
+	if (isolated && folio_test_unevictable(folio) && folio_evictable(folio)) {
+		lruvec_del_folio(lruvec, folio);
+		folio_clear_unevictable(folio);
+		lruvec_add_folio(lruvec, folio);
 		__count_vm_events(UNEVICTABLE_PGRESCUED, nr_pages);
 	}
 out:
 	if (isolated)
-		SetPageLRU(page);
+		folio_set_lru(folio);
 	return lruvec;
 }
 
 /*
- * Flags held in the low bits of a struct page pointer on the mlock_pvec.
+ * Flags held in the low bits of a struct folio pointer on the mlock_fbatch.
  */
-#define LRU_PAGE 0x1
-#define NEW_PAGE 0x2
-static inline struct page *mlock_lru(struct page *page)
+#define LRU_FOLIO 0x1
+#define NEW_FOLIO 0x2
+static inline struct folio *mlock_lru(struct folio *folio)
 {
-	return (struct page *)((unsigned long)page + LRU_PAGE);
+	return (struct folio *)((unsigned long)folio + LRU_FOLIO);
 }
 
-static inline struct page *mlock_new(struct page *page)
+static inline struct folio *mlock_new(struct folio *folio)
 {
-	return (struct page *)((unsigned long)page + NEW_PAGE);
+	return (struct folio *)((unsigned long)folio + NEW_FOLIO);
 }
 
 /*
- * mlock_pagevec() is derived from pagevec_lru_move_fn():
- * perhaps that can make use of such page pointer flags in future,
- * but for now just keep it for mlock.  We could use three separate
- * pagevecs instead, but one feels better (munlocking a full pagevec
- * does not need to drain mlocking pagevecs first).
+ * mlock_folio_batch() is derived from folio_batch_move_lru(): perhaps that can
+ * make use of such folio pointer flags in future, but for now just keep it for
+ * mlock.  We could use three separate folio batches instead, but one feels
+ * better (munlocking a full folio batch does not need to drain mlocking folio
+ * batches first).
  */
-static void mlock_pagevec(struct pagevec *pvec)
+static void mlock_folio_batch(struct folio_batch *fbatch)
 {
 	struct lruvec *lruvec = NULL;
 	unsigned long mlock;
-	struct page *page;
+	struct folio *folio;
 	int i;
 
-	for (i = 0; i < pagevec_count(pvec); i++) {
-		page = pvec->pages[i];
-		mlock = (unsigned long)page & (LRU_PAGE | NEW_PAGE);
-		page = (struct page *)((unsigned long)page - mlock);
-		pvec->pages[i] = page;
+	for (i = 0; i < folio_batch_count(fbatch); i++) {
+		folio = fbatch->folios[i];
+		mlock = (unsigned long)folio & (LRU_FOLIO | NEW_FOLIO);
+		folio = (struct folio *)((unsigned long)folio - mlock);
+		fbatch->folios[i] = folio;
 
-		if (mlock & LRU_PAGE)
-			lruvec = __mlock_page(page, lruvec);
-		else if (mlock & NEW_PAGE)
-			lruvec = __mlock_new_page(page, lruvec);
+		if (mlock & LRU_FOLIO)
+			lruvec = __mlock_folio(folio, lruvec);
+		else if (mlock & NEW_FOLIO)
+			lruvec = __mlock_new_folio(folio, lruvec);
 		else
-			lruvec = __munlock_page(page, lruvec);
+			lruvec = __munlock_folio(folio, lruvec);
 	}
 
 	if (lruvec)
 		unlock_page_lruvec_irq(lruvec);
-	release_pages(pvec->pages, pvec->nr);
-	pagevec_reinit(pvec);
+	release_pages(fbatch->folios, fbatch->nr);
+	folio_batch_reinit(fbatch);
 }
 
 void mlock_page_drain_local(void)
 {
-	struct pagevec *pvec;
+	struct folio_batch *fbatch;
 
-	local_lock(&mlock_pvec.lock);
-	pvec = this_cpu_ptr(&mlock_pvec.vec);
-	if (pagevec_count(pvec))
-		mlock_pagevec(pvec);
-	local_unlock(&mlock_pvec.lock);
+	local_lock(&mlock_fbatch.lock);
+	fbatch = this_cpu_ptr(&mlock_fbatch.fbatch);
+	if (folio_batch_count(fbatch))
+		mlock_folio_batch(fbatch);
+	local_unlock(&mlock_fbatch.lock);
 }
 
 void mlock_page_drain_remote(int cpu)
 {
-	struct pagevec *pvec;
+	struct folio_batch *fbatch;
 
 	WARN_ON_ONCE(cpu_online(cpu));
-	pvec = &per_cpu(mlock_pvec.vec, cpu);
-	if (pagevec_count(pvec))
-		mlock_pagevec(pvec);
+	fbatch = &per_cpu(mlock_fbatch.fbatch, cpu);
+	if (folio_batch_count(fbatch))
+		mlock_folio_batch(fbatch);
 }
 
 bool need_mlock_page_drain(int cpu)
 {
-	return pagevec_count(&per_cpu(mlock_pvec.vec, cpu));
+	return folio_batch_count(&per_cpu(mlock_fbatch.fbatch, cpu));
 }
 
 /**
@@ -242,10 +242,10 @@ bool need_mlock_page_drain(int cpu)
  */
 void mlock_folio(struct folio *folio)
 {
-	struct pagevec *pvec;
+	struct folio_batch *fbatch;
 
-	local_lock(&mlock_pvec.lock);
-	pvec = this_cpu_ptr(&mlock_pvec.vec);
+	local_lock(&mlock_fbatch.lock);
+	fbatch = this_cpu_ptr(&mlock_fbatch.fbatch);
 
 	if (!folio_test_set_mlocked(folio)) {
 		int nr_pages = folio_nr_pages(folio);
 
@@ -255,10 +255,10 @@ void mlock_folio(struct folio *folio)
 	}
 
 	folio_get(folio);
-	if (!pagevec_add(pvec, mlock_lru(&folio->page)) ||
+	if (!folio_batch_add(fbatch, mlock_lru(folio)) ||
 	    folio_test_large(folio) || lru_cache_disabled())
-		mlock_pagevec(pvec);
-	local_unlock(&mlock_pvec.lock);
+		mlock_folio_batch(fbatch);
+	local_unlock(&mlock_fbatch.lock);
 }
 
 /**
@@ -267,20 +267,22 @@ void mlock_folio(struct folio *folio)
  */
 void mlock_new_page(struct page *page)
 {
-	struct pagevec *pvec;
-	int nr_pages = thp_nr_pages(page);
+	struct folio_batch *fbatch;
+	struct folio *folio = page_folio(page);
+	int nr_pages = folio_nr_pages(folio);
 
-	local_lock(&mlock_pvec.lock);
-	pvec = this_cpu_ptr(&mlock_pvec.vec);
-	SetPageMlocked(page);
-	mod_zone_page_state(page_zone(page), NR_MLOCK, nr_pages);
+	local_lock(&mlock_fbatch.lock);
+	fbatch = this_cpu_ptr(&mlock_fbatch.fbatch);
+	folio_set_mlocked(folio);
+
+	zone_stat_mod_folio(folio, NR_MLOCK, nr_pages);
 	__count_vm_events(UNEVICTABLE_PGMLOCKED, nr_pages);
 
-	get_page(page);
-	if (!pagevec_add(pvec, mlock_new(page)) ||
-	    PageHead(page) || lru_cache_disabled())
-		mlock_pagevec(pvec);
-	local_unlock(&mlock_pvec.lock);
+	folio_get(folio);
+	if (!folio_batch_add(fbatch, mlock_new(folio)) ||
+	    folio_test_large(folio) || lru_cache_disabled())
+		mlock_folio_batch(fbatch);
+	local_unlock(&mlock_fbatch.lock);
 }
 
 /**
@@ -289,20 +291,20 @@ void mlock_new_page(struct page *page)
  */
 void munlock_page(struct page *page)
 {
-	struct pagevec *pvec;
+	struct folio_batch *fbatch;
+	struct folio *folio = page_folio(page);
 
-	local_lock(&mlock_pvec.lock);
-	pvec = this_cpu_ptr(&mlock_pvec.vec);
+	local_lock(&mlock_fbatch.lock);
+	fbatch = this_cpu_ptr(&mlock_fbatch.fbatch);
 	/*
-	 * TestClearPageMlocked(page) must be left to __munlock_page(),
-	 * which will check whether the page is multiply mlocked.
+	 * folio_test_clear_mlocked(folio) must be left to __munlock_folio(),
+	 * which will check whether the folio is multiply mlocked.
 	 */
-
-	get_page(page);
-	if (!pagevec_add(pvec, page) ||
-	    PageHead(page) || lru_cache_disabled())
-		mlock_pagevec(pvec);
-	local_unlock(&mlock_pvec.lock);
+	folio_get(folio);
+	if (!folio_batch_add(fbatch, folio) ||
+	    folio_test_large(folio) || lru_cache_disabled())
+		mlock_folio_batch(fbatch);
+	local_unlock(&mlock_fbatch.lock);
 }
 
 static int mlock_pte_range(pmd_t *pmd, unsigned long addr,
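
To illustrate the pointer-tagging scheme this patch carries over, here is a
self-contained sketch (not part of the patch; tag_folio() and untag_folio()
are hypothetical names - the actual code uses mlock_lru()/mlock_new() and
open-codes the decode in mlock_folio_batch()):

/*
 * A struct folio is at least word-aligned, so the low two bits of its
 * address are always zero and can carry the pending operation.
 */
#define LRU_FOLIO 0x1	/* folio was on the LRU: __mlock_folio() */
#define NEW_FOLIO 0x2	/* freshly allocated: __mlock_new_folio() */
			/* neither bit set: __munlock_folio() */

static inline struct folio *tag_folio(struct folio *folio, unsigned long op)
{
	return (struct folio *)((unsigned long)folio | op);
}

static inline struct folio *untag_folio(struct folio *tagged, unsigned long *op)
{
	unsigned long val = (unsigned long)tagged;

	*op = val & (LRU_FOLIO | NEW_FOLIO);
	return (struct folio *)(val & ~(LRU_FOLIO | NEW_FOLIO));
}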

From patchwork Thu Jan 12 12:39:30 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 42405
From: Lorenzo Stoakes
To: linux-mm@kvack.org, Andrew Morton, linux-kernel@vger.kernel.org
Cc: Matthew Wilcox, Hugh Dickins, Vlastimil Babka, Liam Howlett,
    William Kucharski, Christian Brauner, Jonathan Corbet, Mike Rapoport,
    Joel Fernandes, Geert Uytterhoeven, Lorenzo Stoakes
Subject: [PATCH v4 3/5] m68k/mm/motorola: specify pmd_page() type
Date: Thu, 12 Jan 2023 12:39:30 +0000

Failing to specify a specific type here breaks anything that relies on
the type being explicitly known, such as page_folio().

Make the type of the null pointer returned here explicit.

Signed-off-by: Lorenzo Stoakes
Acked-by: Geert Uytterhoeven
Acked-by: Vlastimil Babka
---
 arch/m68k/include/asm/motorola_pgtable.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/m68k/include/asm/motorola_pgtable.h b/arch/m68k/include/asm/motorola_pgtable.h
index 7ac3d64c6b33..562b54e09850 100644
--- a/arch/m68k/include/asm/motorola_pgtable.h
+++ b/arch/m68k/include/asm/motorola_pgtable.h
@@ -124,7 +124,7 @@ static inline void pud_set(pud_t *pudp, pmd_t *pmdp)
  * expects pmd_page() to exists, only to then DCE it all. Provide a dummy to
  * make the compiler happy.
  */
-#define pmd_page(pmd) NULL
+#define pmd_page(pmd) ((struct page *)NULL)
 
 #define pud_none(pud)		(!pud_val(pud))
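
To see why the untyped NULL broke page_folio(), note that page_folio()
dispatches on the static type of its argument with _Generic(), roughly as
follows (simplified from include/linux/page-flags.h):

#define page_folio(p)	_Generic((p),					\
	const struct page *: (const struct folio *)_compound_head(p),	\
	struct page *:	     (struct folio *)_compound_head(p))

A bare NULL has type void *, which matches neither association, so
page_folio(pmd_page(pmd)) fails to compile once callers start using it;
casting the dummy to (struct page *)NULL gives _Generic() a type it can
select on.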

From patchwork Thu Jan 12 12:39:31 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 42408
From: Lorenzo Stoakes
To: linux-mm@kvack.org, Andrew Morton, linux-kernel@vger.kernel.org
Cc: Matthew Wilcox, Hugh Dickins, Vlastimil Babka, Liam Howlett,
    William Kucharski, Christian Brauner, Jonathan Corbet, Mike Rapoport,
    Joel Fernandes, Geert Uytterhoeven, Lorenzo Stoakes
Subject: [PATCH v4 4/5] mm: mlock: update the interface to use folios
Date: Thu, 12 Jan 2023 12:39:31 +0000

This patch updates the mlock interface to accept folios rather than
pages, bringing the interface in line with the internal implementation.

munlock_vma_page() still requires a page_folio() conversion; however,
this is consistent with the existing mlock_vma_page() implementation
and is a product of rmap still dealing in pages rather than folios.

Signed-off-by: Lorenzo Stoakes
Acked-by: Vlastimil Babka
---
 mm/internal.h   | 38 ++++++++++++++++++++++----------------
 mm/migrate.c    |  2 +-
 mm/mlock.c      | 38 ++++++++++++++++++--------------------
 mm/page_alloc.c |  2 +-
 mm/rmap.c       |  4 ++--
 mm/swap.c       | 10 +++++-----
 6 files changed, 49 insertions(+), 45 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index c0a02fcb7745..2d09a7a0600a 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -533,10 +533,9 @@ extern int mlock_future_check(struct mm_struct *mm, unsigned long flags,
  * should be called with vma's mmap_lock held for read or write,
  * under page table lock for the pte/pmd being added or removed.
  *
- * mlock is usually called at the end of page_add_*_rmap(),
- * munlock at the end of page_remove_rmap(); but new anon
- * pages are managed by lru_cache_add_inactive_or_unevictable()
- * calling mlock_new_page().
+ * mlock is usually called at the end of page_add_*_rmap(), munlock at
+ * the end of page_remove_rmap(); but new anon folios are managed by
+ * folio_add_lru_vma() calling mlock_new_folio().
  *
  * @compound is used to include pmd mappings of THPs, but filter out
  * pte mappings of THPs, which cannot be consistently counted: a pte
@@ -565,18 +564,25 @@ static inline void mlock_vma_page(struct page *page,
 	mlock_vma_folio(page_folio(page), vma, compound);
 }
 
-void munlock_page(struct page *page);
-static inline void munlock_vma_page(struct page *page,
+void munlock_folio(struct folio *folio);
+
+static inline void munlock_vma_folio(struct folio *folio,
 		struct vm_area_struct *vma, bool compound)
 {
 	if (unlikely(vma->vm_flags & VM_LOCKED) &&
-	    (compound || !PageTransCompound(page)))
-		munlock_page(page);
+	    (compound || !folio_test_large(folio)))
+		munlock_folio(folio);
+}
+
+static inline void munlock_vma_page(struct page *page,
+		struct vm_area_struct *vma, bool compound)
+{
+	munlock_vma_folio(page_folio(page), vma, compound);
 }
 
-void mlock_new_page(struct page *page);
-bool need_mlock_page_drain(int cpu);
-void mlock_page_drain_local(void);
-void mlock_page_drain_remote(int cpu);
+void mlock_new_folio(struct folio *folio);
+bool need_mlock_drain(int cpu);
+void mlock_drain_local(void);
+void mlock_drain_remote(int cpu);
 
 extern pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma);
 
@@ -665,10 +671,10 @@ static inline void mlock_vma_page(struct page *page,
 			struct vm_area_struct *vma, bool compound) { }
 static inline void munlock_vma_page(struct page *page,
 			struct vm_area_struct *vma, bool compound) { }
-static inline void mlock_new_page(struct page *page) { }
-static inline bool need_mlock_page_drain(int cpu) { return false; }
-static inline void mlock_page_drain_local(void) { }
-static inline void mlock_page_drain_remote(int cpu) { }
+static inline void mlock_new_folio(struct folio *folio) { }
+static inline bool need_mlock_drain(int cpu) { return false; }
+static inline void mlock_drain_local(void) { }
+static inline void mlock_drain_remote(int cpu) { }
 
 static inline void vunmap_range_noflush(unsigned long start, unsigned long end) { }
 
diff --git a/mm/migrate.c b/mm/migrate.c
index a314373c62b7..4d8c8a51f1b8 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -265,7 +265,7 @@ static bool remove_migration_pte(struct folio *folio,
 		set_pte_at(vma->vm_mm, pvmw.address, pvmw.pte, pte);
 	}
 	if (vma->vm_flags & VM_LOCKED)
-		mlock_page_drain_local();
+		mlock_drain_local();
 
 	trace_remove_migration_pte(pvmw.address, pte_val(pte),
 				   compound_order(new));
diff --git a/mm/mlock.c b/mm/mlock.c
index f8e8d30ab08a..9e9c8be58277 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -210,7 +210,7 @@ static void mlock_folio_batch(struct folio_batch *fbatch)
 	folio_batch_reinit(fbatch);
 }
 
-void mlock_page_drain_local(void)
+void mlock_drain_local(void)
 {
 	struct folio_batch *fbatch;
 
@@ -221,7 +221,7 @@ void mlock_page_drain_local(void)
 	local_unlock(&mlock_fbatch.lock);
 }
 
-void mlock_page_drain_remote(int cpu)
+void mlock_drain_remote(int cpu)
 {
 	struct folio_batch *fbatch;
 
@@ -231,7 +231,7 @@ void mlock_page_drain_remote(int cpu)
 	mlock_folio_batch(fbatch);
 }
 
-bool need_mlock_page_drain(int cpu)
+bool need_mlock_drain(int cpu)
 {
 	return folio_batch_count(&per_cpu(mlock_fbatch.fbatch, cpu));
 }
 
@@ -262,13 +262,12 @@ void mlock_folio(struct folio *folio)
 }
 
 /**
- * mlock_new_page - mlock a newly allocated page not yet on LRU
- * @page: page to be mlocked, either a normal page or a THP head.
+ * mlock_new_folio - mlock a newly allocated folio not yet on LRU
+ * @folio: folio to be mlocked, either normal or a THP head.
  */
-void mlock_new_page(struct page *page)
+void mlock_new_folio(struct folio *folio)
 {
 	struct folio_batch *fbatch;
-	struct folio *folio = page_folio(page);
 	int nr_pages = folio_nr_pages(folio);
 
 	local_lock(&mlock_fbatch.lock);
@@ -286,13 +285,12 @@ void mlock_new_page(struct page *page)
 }
 
 /**
- * munlock_page - munlock a page
- * @page: page to be munlocked, either a normal page or a THP head.
+ * munlock_folio - munlock a folio
+ * @folio: folio to be munlocked, either normal or a THP head.
  */
-void munlock_page(struct page *page)
+void munlock_folio(struct folio *folio)
 {
 	struct folio_batch *fbatch;
-	struct folio *folio = page_folio(page);
 
 	local_lock(&mlock_fbatch.lock);
 	fbatch = this_cpu_ptr(&mlock_fbatch.fbatch);
@@ -314,7 +312,7 @@ static int mlock_pte_range(pmd_t *pmd, unsigned long addr,
 	struct vm_area_struct *vma = walk->vma;
 	spinlock_t *ptl;
 	pte_t *start_pte, *pte;
-	struct page *page;
+	struct folio *folio;
 
 	ptl = pmd_trans_huge_lock(pmd, vma);
 	if (ptl) {
@@ -322,11 +320,11 @@ static int mlock_pte_range(pmd_t *pmd, unsigned long addr,
 			goto out;
 		if (is_huge_zero_pmd(*pmd))
 			goto out;
-		page = pmd_page(*pmd);
+		folio = page_folio(pmd_page(*pmd));
 		if (vma->vm_flags & VM_LOCKED)
-			mlock_folio(page_folio(page));
+			mlock_folio(folio);
 		else
-			munlock_page(page);
+			munlock_folio(folio);
 		goto out;
 	}
 
@@ -334,15 +332,15 @@ static int mlock_pte_range(pmd_t *pmd, unsigned long addr,
 	for (pte = start_pte; addr != end; pte++, addr += PAGE_SIZE) {
 		if (!pte_present(*pte))
 			continue;
-		page = vm_normal_page(vma, addr, *pte);
-		if (!page || is_zone_device_page(page))
+		folio = vm_normal_folio(vma, addr, *pte);
+		if (!folio || folio_is_zone_device(folio))
 			continue;
-		if (PageTransCompound(page))
+		if (folio_test_large(folio))
 			continue;
 		if (vma->vm_flags & VM_LOCKED)
-			mlock_folio(page_folio(page));
+			mlock_folio(folio);
 		else
-			munlock_page(page);
+			munlock_folio(folio);
 	}
 	pte_unmap(start_pte);
 out:
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 41a239ce4692..7b36bda246cd 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8610,7 +8610,7 @@ static int page_alloc_cpu_dead(unsigned int cpu)
 	struct zone *zone;
 
 	lru_add_drain_cpu(cpu);
-	mlock_page_drain_remote(cpu);
+	mlock_drain_remote(cpu);
 	drain_pages(cpu);
 
 	/*
diff --git a/mm/rmap.c b/mm/rmap.c
index 7f76fc40af9a..0e450e6bb963 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1764,7 +1764,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 		 */
 		page_remove_rmap(subpage, vma, folio_test_hugetlb(folio));
 		if (vma->vm_flags & VM_LOCKED)
-			mlock_page_drain_local();
+			mlock_drain_local();
 		folio_put(folio);
 	}
 
@@ -2119,7 +2119,7 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 		 */
 		page_remove_rmap(subpage, vma, folio_test_hugetlb(folio));
 		if (vma->vm_flags & VM_LOCKED)
-			mlock_page_drain_local();
+			mlock_drain_local();
 		folio_put(folio);
 	}
 
diff --git a/mm/swap.c b/mm/swap.c
index e54e2a252e27..42d67f9baa8c 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -562,7 +562,7 @@ void folio_add_lru_vma(struct folio *folio, struct vm_area_struct *vma)
 	VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
 
 	if (unlikely((vma->vm_flags & (VM_LOCKED | VM_SPECIAL)) == VM_LOCKED))
-		mlock_new_page(&folio->page);
+		mlock_new_folio(folio);
 	else
 		folio_add_lru(folio);
 }
@@ -781,7 +781,7 @@ void lru_add_drain(void)
 	local_lock(&cpu_fbatches.lock);
 	lru_add_drain_cpu(smp_processor_id());
 	local_unlock(&cpu_fbatches.lock);
-	mlock_page_drain_local();
+	mlock_drain_local();
 }
 
 /*
@@ -796,7 +796,7 @@ static void lru_add_and_bh_lrus_drain(void)
 	lru_add_drain_cpu(smp_processor_id());
 	local_unlock(&cpu_fbatches.lock);
 	invalidate_bh_lrus_cpu();
-	mlock_page_drain_local();
+	mlock_drain_local();
 }
 
 void lru_add_drain_cpu_zone(struct zone *zone)
@@ -805,7 +805,7 @@ void lru_add_drain_cpu_zone(struct zone *zone)
 	lru_add_drain_cpu(smp_processor_id());
 	drain_local_pages(zone);
 	local_unlock(&cpu_fbatches.lock);
-	mlock_page_drain_local();
+	mlock_drain_local();
 }
 
 #ifdef CONFIG_SMP
@@ -828,7 +828,7 @@ static bool cpu_needs_drain(unsigned int cpu)
 		folio_batch_count(&fbatches->lru_deactivate) ||
 		folio_batch_count(&fbatches->lru_lazyfree) ||
 		folio_batch_count(&fbatches->activate) ||
-		need_mlock_page_drain(cpu) ||
+		need_mlock_drain(cpu) ||
 		has_bh_in_lru(cpu, NULL);
 }
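
For callers, the conversion is mechanical; a before/after sketch (not from
the patch - "page" stands for any page already known to be the head of its
folio):

	/* before: page-based interface */
	mlock_new_page(page);
	if (need_mlock_page_drain(cpu))
		mlock_page_drain_remote(cpu);

	/* after: folio-based interface */
	mlock_new_folio(page_folio(page));
	if (need_mlock_drain(cpu))
		mlock_drain_remote(cpu);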

From patchwork Thu Jan 12 12:39:32 2023
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 42407
From: Lorenzo Stoakes
To: linux-mm@kvack.org, Andrew Morton, linux-kernel@vger.kernel.org
Cc: Matthew Wilcox, Hugh Dickins, Vlastimil Babka, Liam Howlett,
    William Kucharski, Christian Brauner, Jonathan Corbet, Mike Rapoport,
    Joel Fernandes, Geert Uytterhoeven, Lorenzo Stoakes
Subject: [PATCH v4 5/5] Documentation/mm: Update references to __m[un]lock_page() to *_folio()
Date: Thu, 12 Jan 2023 12:39:32 +0000
Message-Id: <898c487169d98a7f09c1c1e57a7dfdc2b3f6bf0f.1673526881.git.lstoakes@gmail.com>

We now pass folios to these functions, so update the documentation
accordingly.

Additionally, correct the outdated reference to __pagevec_lru_add_fn()
(the referenced action now occurs directly in __munlock_folio()),
replace the reference to lru_cache_add_inactive_or_unevictable() with
its modern folio equivalent folio_add_lru_vma(), and refer to folio
flags by flag name rather than accessor.

Signed-off-by: Lorenzo Stoakes
Acked-by: Vlastimil Babka
---
 Documentation/mm/unevictable-lru.rst | 30 ++++++++++++++--------------
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/Documentation/mm/unevictable-lru.rst b/Documentation/mm/unevictable-lru.rst
index 4a0e158aa9ce..2a90d0721dd9 100644
--- a/Documentation/mm/unevictable-lru.rst
+++ b/Documentation/mm/unevictable-lru.rst
@@ -308,22 +308,22 @@ do end up getting faulted into this VM_LOCKED VMA, they will be handled in the
 fault path - which is also how mlock2()'s MLOCK_ONFAULT areas are handled.
 
 For each PTE (or PMD) being faulted into a VMA, the page add rmap function
-calls mlock_vma_page(), which calls mlock_page() when the VMA is VM_LOCKED
+calls mlock_vma_page(), which calls mlock_folio() when the VMA is VM_LOCKED
 (unless it is a PTE mapping of a part of a transparent huge page).  Or when
-it is a newly allocated anonymous page, lru_cache_add_inactive_or_unevictable()
-calls mlock_new_page() instead: similar to mlock_page(), but can make better
+it is a newly allocated anonymous page, folio_add_lru_vma() calls
+mlock_new_folio() instead: similar to mlock_folio(), but can make better
 judgments, since this page is held exclusively and known not to be on LRU yet.
 
-mlock_page() sets PageMlocked immediately, then places the page on the CPU's
-mlock pagevec, to batch up the rest of the work to be done under lru_lock by
-__mlock_page().  __mlock_page() sets PageUnevictable, initializes mlock_count
+mlock_folio() sets PG_mlocked immediately, then places the page on the CPU's
+mlock folio batch, to batch up the rest of the work to be done under lru_lock by
+__mlock_folio().  __mlock_folio() sets PG_unevictable, initializes mlock_count
 and moves the page to unevictable state ("the unevictable LRU", but with
-mlock_count in place of LRU threading).  Or if the page was already PageLRU
-and PageUnevictable and PageMlocked, it simply increments the mlock_count.
+mlock_count in place of LRU threading).  Or if the page was already PG_lru
+and PG_unevictable and PG_mlocked, it simply increments the mlock_count.
 
 But in practice that may not work ideally: the page may not yet be on an LRU, or
 it may have been temporarily isolated from LRU.  In such cases the mlock_count
-field cannot be touched, but will be set to 0 later when __pagevec_lru_add_fn()
+field cannot be touched, but will be set to 0 later when __munlock_folio()
 returns the page to "LRU".  Races prohibit mlock_count from being set to 1 then:
 rather than risk stranding a page indefinitely as unevictable, always err with
 mlock_count on the low side, so that when munlocked the page will be rescued to
@@ -377,8 +377,8 @@ that it is munlock() being performed.
 
 munlock_page() uses the mlock pagevec to batch up work to be done under
 lru_lock by __munlock_page().  __munlock_page() decrements the page's
-mlock_count, and when that reaches 0 it clears PageMlocked and clears
-PageUnevictable, moving the page from unevictable state to inactive LRU.
+mlock_count, and when that reaches 0 it clears PG_mlocked and clears
+PG_unevictable, moving the page from unevictable state to inactive LRU.
 
 But in practice that may not work ideally: the page may not yet have reached
 "the unevictable LRU", or it may have been temporarily isolated from it.  In
@@ -488,8 +488,8 @@ munlock_vma_page(), which calls munlock_page() when the VMA is VM_LOCKED
 
 munlock_page() uses the mlock pagevec to batch up work to be done under
 lru_lock by __munlock_page().  __munlock_page() decrements the page's
-mlock_count, and when that reaches 0 it clears PageMlocked and clears
-PageUnevictable, moving the page from unevictable state to inactive LRU.
+mlock_count, and when that reaches 0 it clears PG_mlocked and clears
+PG_unevictable, moving the page from unevictable state to inactive LRU.
 
 But in practice that may not work ideally: the page may not yet have reached
 "the unevictable LRU", or it may have been temporarily isolated from it.  In
@@ -515,7 +515,7 @@ munlocking by clearing VM_LOCKED from a VMA, before munlocking all the pages
 present, if one of those pages were unmapped by truncation or hole punch before
 mlock_pte_range() reached it, it would not be recognized as mlocked by this VMA,
 and would not be counted out of mlock_count.  In this rare case, a page may
-still appear as PageMlocked after it has been fully unmapped: and it is left to
+still appear as PG_mlocked after it has been fully unmapped: and it is left to
 release_pages() (or __page_cache_release()) to clear it and update statistics
 before freeing (this event is counted in /proc/vmstat unevictable_pgs_cleared,
 which is usually 0).
@@ -527,7 +527,7 @@ Page Reclaim in shrink_*_list()
 
 vmscan's shrink_active_list() culls any obviously unevictable pages -
 i.e. !page_evictable(page) pages - diverting those to the unevictable list.
 However, shrink_active_list() only sees unevictable pages that made it onto the
-active/inactive LRU lists.  Note that these pages do not have PageUnevictable
+active/inactive LRU lists.  Note that these pages do not have PG_unevictable
 set - otherwise they would be on the unevictable list and shrink_active_list()
 would never see them.
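
Read together, the documentation above boils down to the following condensed
sketch of the mlock_count rules (not from the patch; the authoritative logic
is __mlock_folio() and __munlock_folio() in mm/mlock.c):

static void mlock_count_sketch(struct folio *folio, bool mlock)
{
	if (mlock) {
		/* already unevictable and mlocked: count one more mlock */
		if (folio_test_unevictable(folio) && folio_test_mlocked(folio))
			folio->mlock_count++;
		return;
	}

	/* munlock: mlock_count errs on the low side... */
	if (folio_test_unevictable(folio) && folio->mlock_count)
		folio->mlock_count--;
	/* ...and only at zero is the folio rescued to the inactive LRU */
	if (!folio->mlock_count && folio_test_clear_mlocked(folio))
		folio_clear_unevictable(folio);
}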