From patchwork Wed Dec 27 14:12:01 2023
X-Patchwork-Submitter: Kinsey Ho
X-Patchwork-Id: 183477
Date: Wed, 27 Dec 2023 14:12:01 +0000
Message-ID: <20231227141205.2200125-2-kinseyho@google.com>
In-Reply-To: <20231227141205.2200125-1-kinseyho@google.com>
References: <20231227141205.2200125-1-kinseyho@google.com>
Subject: [PATCH mm-unstable v4 1/5] mm/mglru: add CONFIG_ARCH_HAS_HW_PTE_YOUNG
From: Kinsey Ho
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Yu Zhao,
    Donet Tom, "Aneesh Kumar K . V", Kinsey Ho
X-Mailer: git-send-email 2.43.0.472.g3155946c3a-goog

Some architectures are able to set the accessed bit in PTEs when PTEs
are used as part of linear address translations. Add
CONFIG_ARCH_HAS_HW_PTE_YOUNG for such architectures to be able to
override arch_has_hw_pte_young().

Signed-off-by: Kinsey Ho
Co-developed-by: Aneesh Kumar K.V
Signed-off-by: Aneesh Kumar K.V
Tested-by: Donet Tom
Acked-by: Yu Zhao
---
 arch/Kconfig                   | 8 ++++++++
 arch/arm64/Kconfig             | 1 +
 arch/x86/Kconfig               | 1 +
 arch/x86/include/asm/pgtable.h | 6 ------
 include/linux/pgtable.h        | 2 +-
 5 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index f4b210ab0612..8c8901f80586 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -1470,6 +1470,14 @@ config DYNAMIC_SIGFRAME
 config HAVE_ARCH_NODE_DEV_GROUP
 	bool
 
+config ARCH_HAS_HW_PTE_YOUNG
+	bool
+	help
+	  Architectures that select this option are capable of setting the
+	  accessed bit in PTE entries when using them as part of linear address
+	  translations. Architectures that require runtime check should select
+	  this option and override arch_has_hw_pte_young().
+
 config ARCH_HAS_NONLEAF_PMD_YOUNG
 	bool
 	help
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 7b071a00425d..12d611f3da5d 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -36,6 +36,7 @@ config ARM64
 	select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
 	select ARCH_HAS_PTE_DEVMAP
 	select ARCH_HAS_PTE_SPECIAL
+	select ARCH_HAS_HW_PTE_YOUNG
 	select ARCH_HAS_SETUP_DMA_OPS
 	select ARCH_HAS_SET_DIRECT_MAP
 	select ARCH_HAS_SET_MEMORY
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 1566748f16c4..04941a1ffc0a 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -88,6 +88,7 @@ config X86
 	select ARCH_HAS_PMEM_API		if X86_64
 	select ARCH_HAS_PTE_DEVMAP		if X86_64
 	select ARCH_HAS_PTE_SPECIAL
+	select ARCH_HAS_HW_PTE_YOUNG
 	select ARCH_HAS_NONLEAF_PMD_YOUNG	if PGTABLE_LEVELS > 2
 	select ARCH_HAS_UACCESS_FLUSHCACHE	if X86_64
 	select ARCH_HAS_COPY_MC			if X86_64
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 57bab91bbf50..08b5cb22d9a6 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1679,12 +1679,6 @@ static inline bool arch_has_pfn_modify_check(void)
 	return boot_cpu_has_bug(X86_BUG_L1TF);
 }
 
-#define arch_has_hw_pte_young arch_has_hw_pte_young
-static inline bool arch_has_hw_pte_young(void)
-{
-	return true;
-}
-
 #define arch_check_zapped_pte arch_check_zapped_pte
 void arch_check_zapped_pte(struct vm_area_struct *vma, pte_t pte);
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index af7639c3b0a3..9ecc20fa6269 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -375,7 +375,7 @@ static inline bool arch_has_hw_nonleaf_pmd_young(void)
  */
 static inline bool arch_has_hw_pte_young(void)
 {
-	return false;
+	return IS_ENABLED(CONFIG_ARCH_HAS_HW_PTE_YOUNG);
 }
 #endif

From patchwork Wed Dec 27 14:12:02 2023
X-Patchwork-Submitter: Kinsey Ho
X-Patchwork-Id: 183478
Date: Wed, 27 Dec 2023 14:12:02 +0000
In-Reply-To: <20231227141205.2200125-1-kinseyho@google.com>
Message-ID: <20231227141205.2200125-3-kinseyho@google.com>
References: <20231227141205.2200125-1-kinseyho@google.com>
Subject: [PATCH mm-unstable v4 2/5] mm/mglru: add CONFIG_LRU_GEN_WALKS_MMU
From: Kinsey Ho
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Yu Zhao,
    Donet Tom, "Aneesh Kumar K . V", Kinsey Ho
X-Mailer: git-send-email 2.43.0.472.g3155946c3a-goog

Add CONFIG_LRU_GEN_WALKS_MMU such that if disabled, the code that
walks page tables to promote pages into the youngest generation will
not be built.

Also improve code readability by adding two helper functions,
get_mm_state() and get_next_mm().

Signed-off-by: Kinsey Ho
Co-developed-by: Aneesh Kumar K.V
Signed-off-by: Aneesh Kumar K.V
Tested-by: Donet Tom
Acked-by: Yu Zhao
---
 include/linux/memcontrol.h |   2 +-
 include/linux/mm_types.h   |  12 ++-
 include/linux/mmzone.h     |   2 +
 kernel/fork.c              |   2 +-
 mm/Kconfig                 |   4 +
 mm/vmscan.c                | 192 ++++++++++++++++++++++++-------------
 6 files changed, 139 insertions(+), 75 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 5de775e6cdd9..20ff87f8e001 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -330,7 +330,7 @@ struct mem_cgroup {
 	struct deferred_split deferred_split_queue;
 #endif
 
-#ifdef CONFIG_LRU_GEN
+#ifdef CONFIG_LRU_GEN_WALKS_MMU
 	/* per-memcg mm_struct list */
 	struct lru_gen_mm_list mm_list;
 #endif
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index a66534c78c4d..552fa2d11c57 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -958,7 +958,7 @@ struct mm_struct {
 		 */
 		unsigned long ksm_zero_pages;
 #endif /* CONFIG_KSM */
-#ifdef CONFIG_LRU_GEN
+#ifdef CONFIG_LRU_GEN_WALKS_MMU
 		struct {
 			/* this mm_struct is on lru_gen_mm_list */
 			struct list_head list;
@@ -973,7 +973,7 @@ struct mm_struct {
 			struct mem_cgroup *memcg;
 #endif
 		} lru_gen;
-#endif /* CONFIG_LRU_GEN */
+#endif /* CONFIG_LRU_GEN_WALKS_MMU */
 	} __randomize_layout;
 
 /*
@@ -1011,6 +1011,10 @@ struct lru_gen_mm_list {
 	spinlock_t lock;
 };
 
+#endif /* CONFIG_LRU_GEN */
+
+#ifdef CONFIG_LRU_GEN_WALKS_MMU
+
 void lru_gen_add_mm(struct mm_struct *mm);
 void lru_gen_del_mm(struct mm_struct *mm);
 #ifdef CONFIG_MEMCG
@@ -1036,7 +1040,7 @@ static inline void lru_gen_use_mm(struct mm_struct *mm)
 	WRITE_ONCE(mm->lru_gen.bitmap, -1);
 }
 
-#else /* !CONFIG_LRU_GEN */
+#else /* !CONFIG_LRU_GEN_WALKS_MMU */
 
 static inline void lru_gen_add_mm(struct mm_struct *mm)
 {
@@ -1060,7 +1064,7 @@ static inline void lru_gen_use_mm(struct mm_struct *mm)
 {
 }
 
-#endif /* CONFIG_LRU_GEN */
+#endif /* CONFIG_LRU_GEN_WALKS_MMU */
 
 struct vma_iterator {
 	struct ma_state mas;
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 2efd3be484fd..bc3f63ec4291 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -640,9 +640,11 @@ struct lruvec {
 #ifdef CONFIG_LRU_GEN
 	/* evictable pages divided into generations */
 	struct lru_gen_folio lrugen;
+#ifdef CONFIG_LRU_GEN_WALKS_MMU
 	/* to concurrently iterate lru_gen_mm_list */
 	struct lru_gen_mm_state mm_state;
 #endif
+#endif /* CONFIG_LRU_GEN */
 #ifdef CONFIG_MEMCG
 	struct pglist_data *pgdat;
 #endif
diff --git a/kernel/fork.c b/kernel/fork.c
index 93924392a5c3..56cf276432c8 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2946,7 +2946,7 @@ pid_t kernel_clone(struct kernel_clone_args *args)
 		get_task_struct(p);
 	}
 
-	if (IS_ENABLED(CONFIG_LRU_GEN) && !(clone_flags & CLONE_VM)) {
+	if (IS_ENABLED(CONFIG_LRU_GEN_WALKS_MMU) && !(clone_flags & CLONE_VM)) {
 		/* lock the task to synchronize with memcg migration */
 		task_lock(p);
 		lru_gen_add_mm(p->mm);
diff --git a/mm/Kconfig b/mm/Kconfig
index 8f8b02e9c136..c98076dec5fb 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -1274,6 +1274,10 @@ config LRU_GEN_STATS
 	  from evicted generations for debugging purpose.
 
 	  This option has a per-memcg and per-node memory overhead.
+
+config LRU_GEN_WALKS_MMU
+	def_bool y
+	depends on LRU_GEN && ARCH_HAS_HW_PTE_YOUNG
 # }
diff --git a/mm/vmscan.c b/mm/vmscan.c
index b4ca3563bcf4..aa7ea09ffb4c 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2671,13 +2671,14 @@ static void get_item_key(void *item, int *key)
 	key[1] = hash >> BLOOM_FILTER_SHIFT;
 }
 
-static bool test_bloom_filter(struct lruvec *lruvec, unsigned long seq, void *item)
+static bool test_bloom_filter(struct lru_gen_mm_state *mm_state, unsigned long seq,
+			      void *item)
 {
 	int key[2];
 	unsigned long *filter;
 	int gen = filter_gen_from_seq(seq);
 
-	filter = READ_ONCE(lruvec->mm_state.filters[gen]);
+	filter = READ_ONCE(mm_state->filters[gen]);
 	if (!filter)
 		return true;
 
@@ -2686,13 +2687,14 @@ static bool test_bloom_filter(struct lruvec *lruvec, unsigned long seq, void *it
 	return test_bit(key[0], filter) && test_bit(key[1], filter);
 }
 
-static void update_bloom_filter(struct lruvec *lruvec, unsigned long seq, void *item)
+static void update_bloom_filter(struct lru_gen_mm_state *mm_state, unsigned long seq,
+				void *item)
 {
 	int key[2];
 	unsigned long *filter;
 	int gen = filter_gen_from_seq(seq);
 
-	filter = READ_ONCE(lruvec->mm_state.filters[gen]);
+	filter = READ_ONCE(mm_state->filters[gen]);
 	if (!filter)
 		return;
 
@@ -2704,12 +2706,12 @@ static void update_bloom_filter(struct lruvec *lruvec, unsigned long seq, void *
 		set_bit(key[1], filter);
 }
 
-static void reset_bloom_filter(struct lruvec *lruvec, unsigned long seq)
+static void reset_bloom_filter(struct lru_gen_mm_state *mm_state, unsigned long seq)
 {
 	unsigned long *filter;
 	int gen = filter_gen_from_seq(seq);
 
-	filter = lruvec->mm_state.filters[gen];
+	filter = mm_state->filters[gen];
 	if (filter) {
 		bitmap_clear(filter, 0, BIT(BLOOM_FILTER_SHIFT));
 		return;
@@ -2717,13 +2719,15 @@ static void reset_bloom_filter(struct lruvec *lruvec, unsigned long seq)
 
 	filter = bitmap_zalloc(BIT(BLOOM_FILTER_SHIFT),
 			       __GFP_HIGH | __GFP_NOMEMALLOC | __GFP_NOWARN);
-	WRITE_ONCE(lruvec->mm_state.filters[gen], filter);
+	WRITE_ONCE(mm_state->filters[gen], filter);
 }
 
 /******************************************************************************
  *                          mm_struct list
  ******************************************************************************/
 
+#ifdef CONFIG_LRU_GEN_WALKS_MMU
+
 static struct lru_gen_mm_list *get_mm_list(struct mem_cgroup *memcg)
 {
 	static struct lru_gen_mm_list mm_list = {
@@ -2740,6 +2744,29 @@ static struct lru_gen_mm_list *get_mm_list(struct mem_cgroup *memcg)
 	return &mm_list;
 }
 
+static struct lru_gen_mm_state *get_mm_state(struct lruvec *lruvec)
+{
+	return &lruvec->mm_state;
+}
+
+static struct mm_struct *get_next_mm(struct lru_gen_mm_walk *walk)
+{
+	int key;
+	struct mm_struct *mm;
+	struct pglist_data *pgdat = lruvec_pgdat(walk->lruvec);
+	struct lru_gen_mm_state *mm_state = get_mm_state(walk->lruvec);
+
+	mm = list_entry(mm_state->head, struct mm_struct, lru_gen.list);
+	key = pgdat->node_id % BITS_PER_TYPE(mm->lru_gen.bitmap);
+
+	if (!walk->force_scan && !test_bit(key, &mm->lru_gen.bitmap))
+		return NULL;
+
+	clear_bit(key, &mm->lru_gen.bitmap);
+
+	return mmget_not_zero(mm) ? mm : NULL;
+}
+
 void lru_gen_add_mm(struct mm_struct *mm)
 {
 	int nid;
@@ -2755,10 +2782,11 @@ void lru_gen_add_mm(struct mm_struct *mm)
 
 	for_each_node_state(nid, N_MEMORY) {
 		struct lruvec *lruvec = get_lruvec(memcg, nid);
+		struct lru_gen_mm_state *mm_state = get_mm_state(lruvec);
 
 		/* the first addition since the last iteration */
-		if (lruvec->mm_state.tail == &mm_list->fifo)
-			lruvec->mm_state.tail = &mm->lru_gen.list;
+		if (mm_state->tail == &mm_list->fifo)
+			mm_state->tail = &mm->lru_gen.list;
 	}
 
 	list_add_tail(&mm->lru_gen.list, &mm_list->fifo);
@@ -2784,14 +2812,15 @@ void lru_gen_del_mm(struct mm_struct *mm)
 
 	for_each_node(nid) {
 		struct lruvec *lruvec = get_lruvec(memcg, nid);
+		struct lru_gen_mm_state *mm_state = get_mm_state(lruvec);
 
 		/* where the current iteration continues after */
-		if (lruvec->mm_state.head == &mm->lru_gen.list)
-			lruvec->mm_state.head = lruvec->mm_state.head->prev;
+		if (mm_state->head == &mm->lru_gen.list)
+			mm_state->head = mm_state->head->prev;
 
 		/* where the last iteration ended before */
-		if (lruvec->mm_state.tail == &mm->lru_gen.list)
-			lruvec->mm_state.tail = lruvec->mm_state.tail->next;
+		if (mm_state->tail == &mm->lru_gen.list)
+			mm_state->tail = mm_state->tail->next;
 	}
 
 	list_del_init(&mm->lru_gen.list);
@@ -2834,10 +2863,30 @@ void lru_gen_migrate_mm(struct mm_struct *mm)
 }
 #endif
 
+#else /* !CONFIG_LRU_GEN_WALKS_MMU */
+
+static struct lru_gen_mm_list *get_mm_list(struct mem_cgroup *memcg)
+{
+	return NULL;
+}
+
+static struct lru_gen_mm_state *get_mm_state(struct lruvec *lruvec)
+{
+	return NULL;
+}
+
+static struct mm_struct *get_next_mm(struct lru_gen_mm_walk *walk)
+{
+	return NULL;
+}
+
+#endif
+
 static void reset_mm_stats(struct lruvec *lruvec, struct lru_gen_mm_walk *walk, bool last)
 {
 	int i;
 	int hist;
+	struct lru_gen_mm_state *mm_state = get_mm_state(lruvec);
 
 	lockdep_assert_held(&get_mm_list(lruvec_memcg(lruvec))->lock);
 
@@ -2845,44 +2894,20 @@ static void reset_mm_stats(struct lruvec *lruvec, struct lru_gen_mm_walk *walk,
 		hist = lru_hist_from_seq(walk->max_seq);
 
 		for (i = 0; i < NR_MM_STATS; i++) {
-			WRITE_ONCE(lruvec->mm_state.stats[hist][i],
-				   lruvec->mm_state.stats[hist][i] + walk->mm_stats[i]);
+			WRITE_ONCE(mm_state->stats[hist][i],
+				   mm_state->stats[hist][i] + walk->mm_stats[i]);
 			walk->mm_stats[i] = 0;
 		}
 	}
 
 	if (NR_HIST_GENS > 1 && last) {
-		hist = lru_hist_from_seq(lruvec->mm_state.seq + 1);
+		hist = lru_hist_from_seq(mm_state->seq + 1);
 
 		for (i = 0; i < NR_MM_STATS; i++)
-			WRITE_ONCE(lruvec->mm_state.stats[hist][i], 0);
+			WRITE_ONCE(mm_state->stats[hist][i], 0);
 	}
 }
 
-static bool should_skip_mm(struct mm_struct *mm, struct lru_gen_mm_walk *walk)
-{
-	int type;
-	unsigned long size = 0;
-	struct pglist_data *pgdat = lruvec_pgdat(walk->lruvec);
-	int key = pgdat->node_id % BITS_PER_TYPE(mm->lru_gen.bitmap);
-
-	if (!walk->force_scan && !test_bit(key, &mm->lru_gen.bitmap))
-		return true;
-
-	clear_bit(key, &mm->lru_gen.bitmap);
-
-	for (type = !walk->can_swap; type < ANON_AND_FILE; type++) {
-		size += type ? get_mm_counter(mm, MM_FILEPAGES) :
-			       get_mm_counter(mm, MM_ANONPAGES) +
-			       get_mm_counter(mm, MM_SHMEMPAGES);
-	}
-
-	if (size < MIN_LRU_BATCH)
-		return true;
-
-	return !mmget_not_zero(mm);
-}
-
 static bool iterate_mm_list(struct lruvec *lruvec, struct lru_gen_mm_walk *walk,
 			    struct mm_struct **iter)
 {
@@ -2891,7 +2916,7 @@ static bool iterate_mm_list(struct lruvec *lruvec, struct lru_gen_mm_walk *walk,
 	struct mm_struct *mm = NULL;
 	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
 	struct lru_gen_mm_list *mm_list = get_mm_list(memcg);
-	struct lru_gen_mm_state *mm_state = &lruvec->mm_state;
+	struct lru_gen_mm_state *mm_state = get_mm_state(lruvec);
 
 	/*
 	 * mm_state->seq is incremented after each iteration of mm_list. There
@@ -2929,11 +2954,7 @@ static bool iterate_mm_list(struct lruvec *lruvec, struct lru_gen_mm_walk *walk,
 			mm_state->tail = mm_state->head->next;
 			walk->force_scan = true;
 		}
-
-		mm = list_entry(mm_state->head, struct mm_struct, lru_gen.list);
-		if (should_skip_mm(mm, walk))
-			mm = NULL;
-	} while (!mm);
+	} while (!(mm = get_next_mm(walk)));
 done:
 	if (*iter || last)
 		reset_mm_stats(lruvec, walk, last);
@@ -2941,7 +2962,7 @@ static bool iterate_mm_list(struct lruvec *lruvec, struct lru_gen_mm_walk *walk,
 	spin_unlock(&mm_list->lock);
 
 	if (mm && first)
-		reset_bloom_filter(lruvec, walk->max_seq + 1);
+		reset_bloom_filter(mm_state, walk->max_seq + 1);
 
 	if (*iter)
 		mmput_async(*iter);
@@ -2956,7 +2977,7 @@ static bool iterate_mm_list_nowalk(struct lruvec *lruvec, unsigned long max_seq)
 	bool success = false;
 	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
 	struct lru_gen_mm_list *mm_list = get_mm_list(memcg);
-	struct lru_gen_mm_state *mm_state = &lruvec->mm_state;
+	struct lru_gen_mm_state *mm_state = get_mm_state(lruvec);
 
 	spin_lock(&mm_list->lock);
 
@@ -3469,6 +3490,7 @@ static void walk_pmd_range(pud_t *pud, unsigned long start, unsigned long end,
 	DECLARE_BITMAP(bitmap, MIN_LRU_BATCH);
 	unsigned long first = -1;
 	struct lru_gen_mm_walk *walk = args->private;
+	struct lru_gen_mm_state *mm_state = get_mm_state(walk->lruvec);
 
 	VM_WARN_ON_ONCE(pud_leaf(*pud));
 
@@ -3520,7 +3542,7 @@ static void walk_pmd_range(pud_t *pud, unsigned long start, unsigned long end,
 			walk_pmd_range_locked(pud, addr, vma, args, bitmap, &first);
 		}
 
-		if (!walk->force_scan && !test_bloom_filter(walk->lruvec, walk->max_seq, pmd + i))
+		if (!walk->force_scan && !test_bloom_filter(mm_state, walk->max_seq, pmd + i))
 			continue;
 
 		walk->mm_stats[MM_NONLEAF_FOUND]++;
@@ -3531,7 +3553,7 @@ static void walk_pmd_range(pud_t *pud, unsigned long start, unsigned long end,
 		walk->mm_stats[MM_NONLEAF_ADDED]++;
 
 		/* carry over to the next generation */
-		update_bloom_filter(walk->lruvec, walk->max_seq + 1, pmd + i);
+		update_bloom_filter(mm_state, walk->max_seq + 1, pmd + i);
 	}
 
 	walk_pmd_range_locked(pud, -1, vma, args, bitmap, &first);
@@ -3738,16 +3760,25 @@ static bool try_to_inc_min_seq(struct lruvec *lruvec, bool can_swap)
 	return success;
 }
 
-static void inc_max_seq(struct lruvec *lruvec, bool can_swap, bool force_scan)
+static bool inc_max_seq(struct lruvec *lruvec, unsigned long max_seq,
+			bool can_swap, bool force_scan)
 {
+	bool success;
 	int prev, next;
 	int type, zone;
 	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 restart:
+	if (max_seq < READ_ONCE(lrugen->max_seq))
+		return false;
+
 	spin_lock_irq(&lruvec->lru_lock);
 
 	VM_WARN_ON_ONCE(!seq_is_valid(lruvec));
 
+	success = max_seq == lrugen->max_seq;
+	if (!success)
+		goto unlock;
+
 	for (type = ANON_AND_FILE - 1; type >= 0; type--) {
 		if (get_nr_gens(lruvec, type) != MAX_NR_GENS)
 			continue;
@@ -3791,8 +3822,10 @@ static void inc_max_seq(struct lruvec *lruvec, bool can_swap, bool force_scan)
 	WRITE_ONCE(lrugen->timestamps[next], jiffies);
 	/* make sure preceding modifications appear */
 	smp_store_release(&lrugen->max_seq, lrugen->max_seq + 1);
-
+unlock:
 	spin_unlock_irq(&lruvec->lru_lock);
+
+	return success;
 }
 
 static bool try_to_inc_max_seq(struct lruvec *lruvec, unsigned long max_seq,
@@ -3802,14 +3835,16 @@ static bool try_to_inc_max_seq(struct lruvec *lruvec, unsigned long max_seq,
 	struct lru_gen_mm_walk *walk;
 	struct mm_struct *mm = NULL;
 	struct lru_gen_folio *lrugen = &lruvec->lrugen;
+	struct lru_gen_mm_state *mm_state = get_mm_state(lruvec);
 
 	VM_WARN_ON_ONCE(max_seq > READ_ONCE(lrugen->max_seq));
 
+	if (!mm_state)
+		return inc_max_seq(lruvec, max_seq, can_swap, force_scan);
+
 	/* see the comment in iterate_mm_list() */
-	if (max_seq <= READ_ONCE(lruvec->mm_state.seq)) {
-		success = false;
-		goto done;
-	}
+	if (max_seq <= READ_ONCE(mm_state->seq))
+		return false;
 
 	/*
 	 * If the hardware doesn't automatically set the accessed bit, fallback
@@ -3839,8 +3874,10 @@ static bool try_to_inc_max_seq(struct lruvec *lruvec, unsigned long max_seq,
 		walk_mm(lruvec, mm, walk);
 	} while (mm);
 done:
-	if (success)
-		inc_max_seq(lruvec, can_swap, force_scan);
+	if (success) {
+		success = inc_max_seq(lruvec, max_seq, can_swap, force_scan);
+		WARN_ON_ONCE(!success);
+	}
 
 	return success;
 }
@@ -3964,6 +4001,7 @@ void lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
 	struct mem_cgroup *memcg = folio_memcg(folio);
 	struct pglist_data *pgdat = folio_pgdat(folio);
 	struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);
+	struct lru_gen_mm_state *mm_state = get_mm_state(lruvec);
 	DEFINE_MAX_SEQ(lruvec);
 	int old_gen, new_gen = lru_gen_from_seq(max_seq);
 
@@ -4042,8 +4080,8 @@ void lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
 	mem_cgroup_unlock_pages();
 
 	/* feedback from rmap walkers to page table walkers */
-	if (suitable_to_scan(i, young))
-		update_bloom_filter(lruvec, max_seq, pvmw->pmd);
+	if (mm_state && suitable_to_scan(i, young))
+		update_bloom_filter(mm_state, max_seq, pvmw->pmd);
 }
 
 /******************************************************************************
@@ -5219,6 +5257,7 @@ static void lru_gen_seq_show_full(struct seq_file *m, struct lruvec *lruvec,
 	int type, tier;
 	int hist = lru_hist_from_seq(seq);
 	struct lru_gen_folio *lrugen = &lruvec->lrugen;
+	struct lru_gen_mm_state *mm_state = get_mm_state(lruvec);
 
 	for (tier = 0; tier < MAX_NR_TIERS; tier++) {
 		seq_printf(m, " %10d", tier);
@@ -5244,6 +5283,9 @@ static void lru_gen_seq_show_full(struct seq_file *m, struct lruvec *lruvec,
 		seq_putc(m, '\n');
 	}
 
+	if (!mm_state)
+		return;
+
 	seq_puts(m, " ");
 	for (i = 0; i < NR_MM_STATS; i++) {
 		const char *s = " ";
@@ -5251,10 +5293,10 @@ static void lru_gen_seq_show_full(struct seq_file *m, struct lruvec *lruvec,
 
 		if (seq == max_seq && NR_HIST_GENS == 1) {
 			s = "LOYNFA";
-			n = READ_ONCE(lruvec->mm_state.stats[hist][i]);
+			n = READ_ONCE(mm_state->stats[hist][i]);
 		} else if (seq != max_seq && NR_HIST_GENS > 1) {
 			s = "loynfa";
-			n = READ_ONCE(lruvec->mm_state.stats[hist][i]);
+			n = READ_ONCE(mm_state->stats[hist][i]);
 		}
 
 		seq_printf(m, " %10lu%c", n, s[i]);
@@ -5523,6 +5565,7 @@ void lru_gen_init_lruvec(struct lruvec *lruvec)
 	int i;
 	int gen, type, zone;
 	struct lru_gen_folio *lrugen = &lruvec->lrugen;
+	struct lru_gen_mm_state *mm_state = get_mm_state(lruvec);
 
 	lrugen->max_seq = MIN_NR_GENS + 1;
 	lrugen->enabled = lru_gen_enabled();
@@ -5533,7 +5576,8 @@ void lru_gen_init_lruvec(struct lruvec *lruvec)
 	for_each_gen_type_zone(gen, type, zone)
 		INIT_LIST_HEAD(&lrugen->folios[gen][type][zone]);
 
-	lruvec->mm_state.seq = MIN_NR_GENS;
+	if (mm_state)
+		mm_state->seq = MIN_NR_GENS;
 }
 
 #ifdef CONFIG_MEMCG
@@ -5552,28 +5596,38 @@ void lru_gen_init_pgdat(struct pglist_data *pgdat)
 
 void lru_gen_init_memcg(struct mem_cgroup *memcg)
 {
-	INIT_LIST_HEAD(&memcg->mm_list.fifo);
-	spin_lock_init(&memcg->mm_list.lock);
+	struct lru_gen_mm_list *mm_list = get_mm_list(memcg);
+
+	if (!mm_list)
+		return;
+
+	INIT_LIST_HEAD(&mm_list->fifo);
+	spin_lock_init(&mm_list->lock);
 }
 
 void lru_gen_exit_memcg(struct mem_cgroup *memcg)
 {
 	int i;
 	int nid;
+	struct lru_gen_mm_list *mm_list = get_mm_list(memcg);
 
-	VM_WARN_ON_ONCE(!list_empty(&memcg->mm_list.fifo));
+	VM_WARN_ON_ONCE(mm_list && !list_empty(&mm_list->fifo));
 
 	for_each_node(nid) {
 		struct lruvec *lruvec = get_lruvec(memcg, nid);
+		struct lru_gen_mm_state *mm_state = get_mm_state(lruvec);
 
 		VM_WARN_ON_ONCE(memchr_inv(lruvec->lrugen.nr_pages, 0,
 					   sizeof(lruvec->lrugen.nr_pages)));
 
 		lruvec->lrugen.list.next = LIST_POISON1;
 
+		if (!mm_state)
+			continue;
+
 		for (i = 0; i < NR_BLOOM_FILTERS; i++) {
-			bitmap_free(lruvec->mm_state.filters[i]);
-			lruvec->mm_state.filters[i] = NULL;
+			bitmap_free(mm_state->filters[i]);
+			mm_state->filters[i] = NULL;
 		}
	}
 }

From patchwork Wed Dec 27 14:12:03 2023
X-Patchwork-Submitter: Kinsey Ho
X-Patchwork-Id: 183479
Date: Wed, 27 Dec 2023 14:12:03 +0000
In-Reply-To: <20231227141205.2200125-1-kinseyho@google.com>
References: <20231227141205.2200125-1-kinseyho@google.com>
Message-ID: <20231227141205.2200125-4-kinseyho@google.com>
Subject: [PATCH mm-unstable v4 3/5] mm/mglru: remove CONFIG_MEMCG
From: Kinsey Ho
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Yu Zhao, Donet Tom, "Aneesh Kumar K.V", Kinsey Ho

Remove CONFIG_MEMCG in a refactoring to improve code readability at the
cost of a few bytes in struct lru_gen_folio per node when CONFIG_MEMCG=n.

Signed-off-by: Kinsey Ho
Co-developed-by: Aneesh Kumar K.V
Signed-off-by: Aneesh Kumar K.V
Tested-by: Donet Tom
Acked-by: Yu Zhao
---
 include/linux/mm_types.h |  4 ---
 include/linux/mmzone.h   | 26 ++--------------
 mm/vmscan.c              | 67 +++++++++++++---------------------------
 3 files changed, 23 insertions(+), 74 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 552fa2d11c57..55b7121809ff 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -1017,9 +1017,7 @@ struct lru_gen_mm_list {
 
 void lru_gen_add_mm(struct mm_struct *mm);
 void lru_gen_del_mm(struct mm_struct *mm);
-#ifdef CONFIG_MEMCG
 void lru_gen_migrate_mm(struct mm_struct *mm);
-#endif
 
 static inline void lru_gen_init_mm(struct mm_struct *mm)
 {
@@ -1050,11 +1048,9 @@ static inline void lru_gen_del_mm(struct mm_struct *mm)
 {
 }
 
-#ifdef CONFIG_MEMCG
 static inline void lru_gen_migrate_mm(struct mm_struct *mm)
 {
 }
-#endif
 
 static inline void lru_gen_init_mm(struct mm_struct *mm)
 {
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index bc3f63ec4291..28665e1b8475 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -440,14 +440,12 @@ struct lru_gen_folio {
 	atomic_long_t refaulted[NR_HIST_GENS][ANON_AND_FILE][MAX_NR_TIERS];
 	/* whether the multi-gen LRU is enabled */
 	bool enabled;
-#ifdef CONFIG_MEMCG
 	/* the memcg generation this lru_gen_folio belongs to */
 	u8 gen;
 	/* the list segment this lru_gen_folio belongs to */
 	u8 seg;
 	/* per-node lru_gen_folio list for global reclaim */
 	struct hlist_nulls_node list;
-#endif
 };
 
 enum {
@@ -493,11 +491,6 @@ struct lru_gen_mm_walk {
 	bool force_scan;
 };
 
-void lru_gen_init_lruvec(struct lruvec *lruvec);
-void lru_gen_look_around(struct page_vma_mapped_walk *pvmw);
-
-#ifdef CONFIG_MEMCG
-
 /*
  * For each node, memcgs are divided into two generations: the old and the
  * young. For each generation, memcgs are randomly sharded into multiple bins
@@ -555,6 +548,8 @@ struct lru_gen_memcg {
 };
 
 void lru_gen_init_pgdat(struct pglist_data *pgdat);
+void lru_gen_init_lruvec(struct lruvec *lruvec);
+void lru_gen_look_around(struct page_vma_mapped_walk *pvmw);
 
 void lru_gen_init_memcg(struct mem_cgroup *memcg);
 void lru_gen_exit_memcg(struct mem_cgroup *memcg);
@@ -563,19 +558,6 @@ void lru_gen_offline_memcg(struct mem_cgroup *memcg);
 void lru_gen_release_memcg(struct mem_cgroup *memcg);
 void lru_gen_soft_reclaim(struct mem_cgroup *memcg, int nid);
 
-#else /* !CONFIG_MEMCG */
-
-#define MEMCG_NR_GENS 1
-
-struct lru_gen_memcg {
-};
-
-static inline void lru_gen_init_pgdat(struct pglist_data *pgdat)
-{
-}
-
-#endif /* CONFIG_MEMCG */
-
 #else /* !CONFIG_LRU_GEN */
 
 static inline void lru_gen_init_pgdat(struct pglist_data *pgdat)
@@ -590,8 +572,6 @@ static inline void lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
 {
 }
 
-#ifdef CONFIG_MEMCG
-
 static inline void lru_gen_init_memcg(struct mem_cgroup *memcg)
 {
 }
@@ -616,8 +596,6 @@ static inline void lru_gen_soft_reclaim(struct mem_cgroup *memcg, int nid)
 {
 }
 
-#endif /* CONFIG_MEMCG */
-
 #endif /* CONFIG_LRU_GEN */
 
 struct lruvec {
diff --git a/mm/vmscan.c b/mm/vmscan.c
index aa7ea09ffb4c..351a0b5043c0 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4097,13 +4097,6 @@ enum {
 	MEMCG_LRU_YOUNG,
 };
 
-#ifdef CONFIG_MEMCG
-
-static int lru_gen_memcg_seg(struct lruvec *lruvec)
-{
-	return READ_ONCE(lruvec->lrugen.seg);
-}
-
 static void lru_gen_rotate_memcg(struct lruvec *lruvec, int op)
 {
 	int seg;
@@ -4150,6 +4143,8 @@ static void lru_gen_rotate_memcg(struct lruvec *lruvec, int op)
 	spin_unlock_irqrestore(&pgdat->memcg_lru.lock, flags);
 }
 
+#ifdef CONFIG_MEMCG
+
 void lru_gen_online_memcg(struct mem_cgroup *memcg)
 {
 	int gen;
@@ -4217,18 +4212,11 @@ void lru_gen_soft_reclaim(struct mem_cgroup *memcg, int nid)
 	struct lruvec *lruvec = get_lruvec(memcg, nid);
 
 	/* see the comment on MEMCG_NR_GENS */
-	if (lru_gen_memcg_seg(lruvec) != MEMCG_LRU_HEAD)
+	if (READ_ONCE(lruvec->lrugen.seg) != MEMCG_LRU_HEAD)
 		lru_gen_rotate_memcg(lruvec, MEMCG_LRU_HEAD);
 }
 
-#else /* !CONFIG_MEMCG */
-
-static int lru_gen_memcg_seg(struct lruvec *lruvec)
-{
-	return 0;
-}
-
-#endif
+#endif /* CONFIG_MEMCG */
 
 /******************************************************************************
  *                          the eviction
@@ -4776,7 +4764,7 @@ static int shrink_one(struct lruvec *lruvec, struct scan_control *sc)
 
 	if (mem_cgroup_below_low(NULL, memcg)) {
 		/* see the comment on MEMCG_NR_GENS */
-		if (lru_gen_memcg_seg(lruvec) != MEMCG_LRU_TAIL)
+		if (READ_ONCE(lruvec->lrugen.seg) != MEMCG_LRU_TAIL)
 			return MEMCG_LRU_TAIL;
 
 		memcg_memory_event(memcg, MEMCG_LOW);
@@ -4799,12 +4787,10 @@ static int shrink_one(struct lruvec *lruvec, struct scan_control *sc)
 		return 0;
 
 	/* one retry if offlined or too small */
-	return lru_gen_memcg_seg(lruvec) != MEMCG_LRU_TAIL ?
+	return READ_ONCE(lruvec->lrugen.seg) != MEMCG_LRU_TAIL ?
 	       MEMCG_LRU_TAIL : MEMCG_LRU_YOUNG;
 }
 
-#ifdef CONFIG_MEMCG
-
 static void shrink_many(struct pglist_data *pgdat, struct scan_control *sc)
 {
 	int op;
@@ -4896,20 +4882,6 @@ static void lru_gen_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc
 	blk_finish_plug(&plug);
 }
 
-#else /* !CONFIG_MEMCG */
-
-static void shrink_many(struct pglist_data *pgdat, struct scan_control *sc)
-{
-	BUILD_BUG();
-}
-
-static void lru_gen_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
-{
-	BUILD_BUG();
-}
-
-#endif
-
 static void set_initial_priority(struct pglist_data *pgdat, struct scan_control *sc)
 {
 	int priority;
@@ -5560,6 +5532,18 @@ static const struct file_operations lru_gen_ro_fops = {
  *                          initialization
  ******************************************************************************/
 
+void lru_gen_init_pgdat(struct pglist_data *pgdat)
+{
+	int i, j;
+
+	spin_lock_init(&pgdat->memcg_lru.lock);
+
+	for (i = 0; i < MEMCG_NR_GENS; i++) {
+		for (j = 0; j < MEMCG_NR_BINS; j++)
+			INIT_HLIST_NULLS_HEAD(&pgdat->memcg_lru.fifo[i][j], i);
+	}
+}
+
 void lru_gen_init_lruvec(struct lruvec *lruvec)
 {
 	int i;
@@ -5582,18 +5566,6 @@ void lru_gen_init_lruvec(struct lruvec *lruvec)
 
 #ifdef CONFIG_MEMCG
 
-void lru_gen_init_pgdat(struct pglist_data *pgdat)
-{
-	int i, j;
-
-	spin_lock_init(&pgdat->memcg_lru.lock);
-
-	for (i = 0; i < MEMCG_NR_GENS; i++) {
-		for (j = 0; j < MEMCG_NR_BINS; j++)
-			INIT_HLIST_NULLS_HEAD(&pgdat->memcg_lru.fifo[i][j], i);
-	}
-}
-
 void lru_gen_init_memcg(struct mem_cgroup *memcg)
 {
 	struct lru_gen_mm_list *mm_list = get_mm_list(memcg);
@@ -5653,14 +5625,17 @@ late_initcall(init_lru_gen);
 
 static void lru_gen_age_node(struct pglist_data *pgdat, struct scan_control *sc)
 {
+	BUILD_BUG();
 }
 
 static void lru_gen_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
 {
+	BUILD_BUG();
 }
 
 static void lru_gen_shrink_node(struct pglist_data *pgdat, struct scan_control *sc)
 {
+	BUILD_BUG();
 }
 
 #endif /* CONFIG_LRU_GEN */

From patchwork Wed Dec 27 14:12:04 2023
X-Patchwork-Submitter: Kinsey Ho
X-Patchwork-Id: 183480
Date: Wed, 27 Dec 2023 14:12:04 +0000
In-Reply-To: <20231227141205.2200125-1-kinseyho@google.com>
References: <20231227141205.2200125-1-kinseyho@google.com>
Message-ID: <20231227141205.2200125-5-kinseyho@google.com>
Subject: [PATCH mm-unstable v4 4/5] mm/mglru: add dummy pmd_dirty()
From: Kinsey Ho
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Yu Zhao, Donet Tom, "Aneesh Kumar K.V", Kinsey Ho, kernel test robot

Add dummy pmd_dirty() for architectures that don't provide it. This is
similar to commit 6617da8fb565 ("mm: add dummy pmd_young() for
architectures not having it").

Reported-by: kernel test robot
Closes: https://lore.kernel.org/oe-kbuild-all/202312210606.1Etqz3M4-lkp@intel.com/
Closes: https://lore.kernel.org/oe-kbuild-all/202312210042.xQEiqlEh-lkp@intel.com/
Signed-off-by: Kinsey Ho
Suggested-by: Yu Zhao
---
 arch/loongarch/include/asm/pgtable.h | 1 +
 arch/mips/include/asm/pgtable.h      | 1 +
 arch/riscv/include/asm/pgtable.h     | 1 +
 arch/s390/include/asm/pgtable.h      | 1 +
 arch/sparc/include/asm/pgtable_64.h  | 1 +
 arch/x86/include/asm/pgtable.h       | 1 +
 include/linux/pgtable.h              | 7 +++++++
 7 files changed, 13 insertions(+)

diff --git a/arch/loongarch/include/asm/pgtable.h b/arch/loongarch/include/asm/pgtable.h
index 29d9b12298bc..8b5df1bbf9e9 100644
--- a/arch/loongarch/include/asm/pgtable.h
+++ b/arch/loongarch/include/asm/pgtable.h
@@ -523,6 +523,7 @@ static inline pmd_t pmd_wrprotect(pmd_t pmd)
 	return pmd;
 }
 
+#define pmd_dirty pmd_dirty
 static inline int pmd_dirty(pmd_t pmd)
 {
 	return !!(pmd_val(pmd) & (_PAGE_DIRTY | _PAGE_MODIFIED));
diff --git a/arch/mips/include/asm/pgtable.h b/arch/mips/include/asm/pgtable.h
index 430b208c0130..e27a4c83c548 100644
--- a/arch/mips/include/asm/pgtable.h
+++ b/arch/mips/include/asm/pgtable.h
@@ -655,6 +655,7 @@ static inline pmd_t pmd_mkwrite_novma(pmd_t pmd)
 	return pmd;
 }
 
+#define pmd_dirty pmd_dirty
 static inline int pmd_dirty(pmd_t pmd)
 {
 	return !!(pmd_val(pmd) & _PAGE_MODIFIED);
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index ab00235b018f..7b4287f36054 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -673,6 +673,7 @@ static inline int pmd_write(pmd_t pmd)
 	return pte_write(pmd_pte(pmd));
 }
 
+#define pmd_dirty pmd_dirty
 static inline int pmd_dirty(pmd_t pmd)
 {
 	return pte_dirty(pmd_pte(pmd));
diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index 601e87fa8a9a..1299b56e43f6 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -770,6 +770,7 @@ static inline int pud_write(pud_t pud)
 	return (pud_val(pud) & _REGION3_ENTRY_WRITE) != 0;
 }
 
+#define pmd_dirty pmd_dirty
 static inline int pmd_dirty(pmd_t pmd)
 {
 	return (pmd_val(pmd) & _SEGMENT_ENTRY_DIRTY) != 0;
diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
index 5e41033bf4ca..a8c871b7d786 100644
--- a/arch/sparc/include/asm/pgtable_64.h
+++ b/arch/sparc/include/asm/pgtable_64.h
@@ -706,6 +706,7 @@ static inline unsigned long pmd_write(pmd_t pmd)
 #define pud_write(pud)	pte_write(__pte(pud_val(pud)))
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
+#define pmd_dirty pmd_dirty
 static inline unsigned long pmd_dirty(pmd_t pmd)
 {
 	pte_t pte = __pte(pmd_val(pmd));
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 08b5cb22d9a6..9d077bca6a10 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -141,6 +141,7 @@ static inline int pte_young(pte_t pte)
 	return pte_flags(pte) & _PAGE_ACCESSED;
 }
 
+#define pmd_dirty pmd_dirty
 static inline bool pmd_dirty(pmd_t pmd)
 {
 	return pmd_flags(pmd) & _PAGE_DIRTY_BITS;
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 9ecc20fa6269..466cf477551a 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -184,6 +184,13 @@ static inline int pmd_young(pmd_t pmd)
 }
 #endif
 
+#ifndef pmd_dirty
+static inline int pmd_dirty(pmd_t pmd)
+{
+	return 0;
+}
+#endif
+
 /*
  * A facility to provide lazy MMU batching. This allows PTE updates and
  * page invalidations to be delayed until a call to leave lazy MMU mode

From patchwork Wed Dec 27 14:12:05 2023
X-Patchwork-Submitter: Kinsey Ho
X-Patchwork-Id: 183481
Date: Wed, 27 Dec 2023 14:12:05 +0000
In-Reply-To: <20231227141205.2200125-1-kinseyho@google.com>
References: <20231227141205.2200125-1-kinseyho@google.com>
Message-ID: <20231227141205.2200125-6-kinseyho@google.com>
Subject: [PATCH mm-unstable v4 5/5] mm/mglru: remove CONFIG_TRANSPARENT_HUGEPAGE
From: Kinsey Ho
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Yu Zhao, Donet Tom, "Aneesh Kumar K.V", Kinsey Ho

Improve code readability by removing CONFIG_TRANSPARENT_HUGEPAGE, since
the compiler should be able to automatically optimize out the code that
promotes THPs during page table walks.

No functional changes.
Signed-off-by: Kinsey Ho
Co-developed-by: Aneesh Kumar K.V
Signed-off-by: Aneesh Kumar K.V
Tested-by: Donet Tom
Acked-by: Yu Zhao
---
 mm/vmscan.c | 12 +-----------
 1 file changed, 1 insertion(+), 11 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 351a0b5043c0..ceba905e5630 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3273,7 +3273,6 @@ static unsigned long get_pte_pfn(pte_t pte, struct vm_area_struct *vma, unsigned
 	return pfn;
 }
 
-#if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG)
 static unsigned long get_pmd_pfn(pmd_t pmd, struct vm_area_struct *vma, unsigned long addr)
 {
 	unsigned long pfn = pmd_pfn(pmd);
@@ -3291,7 +3290,6 @@ static unsigned long get_pmd_pfn(pmd_t pmd, struct vm_area_struct *vma, unsigned
 
 	return pfn;
 }
-#endif
 
 static struct folio *get_pfn_folio(unsigned long pfn, struct mem_cgroup *memcg,
 				   struct pglist_data *pgdat, bool can_swap)
@@ -3394,7 +3392,6 @@ static bool walk_pte_range(pmd_t *pmd, unsigned long start, unsigned long end,
 	return suitable_to_scan(total, young);
 }
 
-#if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG)
 static void walk_pmd_range_locked(pud_t *pud, unsigned long addr, struct vm_area_struct *vma,
 				  struct mm_walk *args, unsigned long *bitmap, unsigned long *first)
 {
@@ -3472,12 +3469,6 @@ static void walk_pmd_range_locked(pud_t *pud, unsigned long addr, struct vm_area
 done:
 	*first = -1;
 }
-#else
-static void walk_pmd_range_locked(pud_t *pud, unsigned long addr, struct vm_area_struct *vma,
-				  struct mm_walk *args, unsigned long *bitmap, unsigned long *first)
-{
-}
-#endif
 
 static void walk_pmd_range(pud_t *pud, unsigned long start, unsigned long end,
 			   struct mm_walk *args)
@@ -3513,7 +3504,6 @@ static void walk_pmd_range(pud_t *pud, unsigned long start, unsigned long end,
 			continue;
 		}
 
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 		if (pmd_trans_huge(val)) {
 			unsigned long pfn = pmd_pfn(val);
 			struct pglist_data *pgdat = lruvec_pgdat(walk->lruvec);
@@ -3532,7 +3522,7 @@ static void walk_pmd_range(pud_t *pud, unsigned long start, unsigned long end,
 				walk_pmd_range_locked(pud, addr, vma, args, bitmap, &first);
 			continue;
 		}
-#endif
+
 		walk->mm_stats[MM_NONLEAF_TOTAL]++;
 
 		if (should_clear_pmd_young()) {