From patchwork Tue Feb 13 21:55:14 2024
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 200673
From: Zi Yan
To: "Pankaj Raghav (Samsung)", linux-mm@kvack.org
Cc: Zi Yan, "Matthew Wilcox (Oracle)", David Hildenbrand, Yang Shi, Yu Zhao,
 "Kirill A. Shutemov", Ryan Roberts, Michal Koutný, Roman Gushchin,
 "Zach O'Keefe", Hugh Dickins, Mcgrof Chamberlain, Andrew Morton,
 linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, linux-kselftest@vger.kernel.org
Subject: [PATCH v4 1/7] mm/memcg: use order instead of nr in split_page_memcg()
Date: Tue, 13 Feb 2024 16:55:14 -0500
Message-ID: <20240213215520.1048625-2-zi.yan@sent.com>
In-Reply-To: <20240213215520.1048625-1-zi.yan@sent.com>
References: <20240213215520.1048625-1-zi.yan@sent.com>

From: Zi Yan

We do not have non-power-of-two pages; using nr is error prone if nr is
not a power of two. Use the page order instead.
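As an aside for readers of this series (an illustrative sketch, not part of
the patch): with an order argument the callee derives the page count itself,
so a caller can no longer hand in a count that is not a power of two. The
helper name below is invented purely for illustration.

	/* Illustrative sketch only, not kernel code from this series. */
	static inline unsigned int nr_from_order(int order)
	{
		return 1U << order;	/* always a power of two by construction */
	}

	/* Before: callers pass a raw count, e.g. split_page_memcg(head, nr).  */
	/* After:  callers pass the order, e.g. split_page_memcg(head, order), */
	/*         and the callee computes nr = 1 << order internally.         */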
Signed-off-by: Zi Yan
Acked-by: David Hildenbrand
---
 include/linux/memcontrol.h | 4 ++--
 mm/huge_memory.c           | 3 ++-
 mm/memcontrol.c            | 3 ++-
 mm/page_alloc.c            | 4 ++--
 4 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 4e4caeaea404..173bbb53c1ec 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1163,7 +1163,7 @@ static inline void memcg_memory_event_mm(struct mm_struct *mm,
 	rcu_read_unlock();
 }
 
-void split_page_memcg(struct page *head, unsigned int nr);
+void split_page_memcg(struct page *head, int order);
 
 unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order,
 						gfp_t gfp_mask,
@@ -1621,7 +1621,7 @@ void count_memcg_event_mm(struct mm_struct *mm, enum vm_event_item idx)
 {
 }
 
-static inline void split_page_memcg(struct page *head, unsigned int nr)
+static inline void split_page_memcg(struct page *head, int order)
 {
 }
 
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 016e20bd813e..0cd5fba0923c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2877,9 +2877,10 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	unsigned long offset = 0;
 	unsigned int nr = thp_nr_pages(head);
 	int i, nr_dropped = 0;
+	int order = folio_order(folio);
 
 	/* complete memcg works before add pages to LRU */
-	split_page_memcg(head, nr);
+	split_page_memcg(head, order);
 
 	if (folio_test_anon(folio) && folio_test_swapcache(folio)) {
 		offset = swp_offset(folio->swap);
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 93ad8640b741..404e529644c0 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3608,11 +3608,12 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size)
 /*
  * Because page_memcg(head) is not set on tails, set it now.
  */
-void split_page_memcg(struct page *head, unsigned int nr)
+void split_page_memcg(struct page *head, int order)
 {
 	struct folio *folio = page_folio(head);
 	struct mem_cgroup *memcg = folio_memcg(folio);
 	int i;
+	unsigned int nr = 1 << order;
 
 	if (mem_cgroup_disabled() || !memcg)
 		return;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 7ae4b74c9e5c..7c927b84e16c 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2653,7 +2653,7 @@ void split_page(struct page *page, unsigned int order)
 	for (i = 1; i < (1 << order); i++)
 		set_page_refcounted(page + i);
 	split_page_owner(page, 1 << order);
-	split_page_memcg(page, 1 << order);
+	split_page_memcg(page, order);
 }
 EXPORT_SYMBOL_GPL(split_page);
 
@@ -4838,7 +4838,7 @@ static void *make_alloc_exact(unsigned long addr, unsigned int order,
 	struct page *last = page + nr;
 
 	split_page_owner(page, 1 << order);
-	split_page_memcg(page, 1 << order);
+	split_page_memcg(page, order);
 
 	while (page < --last)
 		set_page_refcounted(last);

From patchwork Tue Feb 13 21:55:15 2024
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 200674
From: Zi Yan
To: "Pankaj Raghav (Samsung)", linux-mm@kvack.org
Cc: Zi Yan, "Matthew Wilcox (Oracle)", David Hildenbrand, Yang Shi, Yu Zhao,
 "Kirill A. Shutemov", Ryan Roberts, Michal Koutný, Roman Gushchin,
 "Zach O'Keefe", Hugh Dickins, Mcgrof Chamberlain, Andrew Morton,
 linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, linux-kselftest@vger.kernel.org
Subject: [PATCH v4 2/7] mm/page_owner: use order instead of nr in split_page_owner()
Date: Tue, 13 Feb 2024 16:55:15 -0500
Message-ID: <20240213215520.1048625-3-zi.yan@sent.com>
In-Reply-To: <20240213215520.1048625-1-zi.yan@sent.com>
References: <20240213215520.1048625-1-zi.yan@sent.com>

From: Zi Yan

We do not have non-power-of-two pages; using nr is error prone if nr is
not a power of two.
Use the page order instead.

Signed-off-by: Zi Yan
Acked-by: David Hildenbrand
---
 include/linux/page_owner.h | 8 ++++----
 mm/huge_memory.c           | 2 +-
 mm/page_alloc.c            | 4 ++--
 mm/page_owner.c            | 3 ++-
 4 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/include/linux/page_owner.h b/include/linux/page_owner.h
index 119a0c9d2a8b..d7878523adfc 100644
--- a/include/linux/page_owner.h
+++ b/include/linux/page_owner.h
@@ -11,7 +11,7 @@ extern struct page_ext_operations page_owner_ops;
 extern void __reset_page_owner(struct page *page, unsigned short order);
 extern void __set_page_owner(struct page *page,
 			unsigned short order, gfp_t gfp_mask);
-extern void __split_page_owner(struct page *page, unsigned int nr);
+extern void __split_page_owner(struct page *page, int order);
 extern void __folio_copy_owner(struct folio *newfolio, struct folio *old);
 extern void __set_page_owner_migrate_reason(struct page *page, int reason);
 extern void __dump_page_owner(const struct page *page);
@@ -31,10 +31,10 @@ static inline void set_page_owner(struct page *page,
 	__set_page_owner(page, order, gfp_mask);
 }
 
-static inline void split_page_owner(struct page *page, unsigned int nr)
+static inline void split_page_owner(struct page *page, int order)
 {
 	if (static_branch_unlikely(&page_owner_inited))
-		__split_page_owner(page, nr);
+		__split_page_owner(page, order);
 }
 static inline void folio_copy_owner(struct folio *newfolio, struct folio *old)
 {
@@ -60,7 +60,7 @@ static inline void set_page_owner(struct page *page,
 {
 }
 static inline void split_page_owner(struct page *page,
-			unsigned short order)
+			int order)
 {
 }
 static inline void folio_copy_owner(struct folio *newfolio, struct folio *folio)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 0cd5fba0923c..f079b02f1f59 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2919,7 +2919,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	unlock_page_lruvec(lruvec);
 	/* Caller disabled irqs, so they are still disabled here */
 
-	split_page_owner(head, nr);
+	split_page_owner(head, order);
 
 	/* See comment in __split_huge_page_tail() */
 	if (PageAnon(head)) {
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 7c927b84e16c..b6e8fe6fed67 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2652,7 +2652,7 @@ void split_page(struct page *page, unsigned int order)
 	for (i = 1; i < (1 << order); i++)
 		set_page_refcounted(page + i);
-	split_page_owner(page, 1 << order);
+	split_page_owner(page, order);
 	split_page_memcg(page, order);
 }
 EXPORT_SYMBOL_GPL(split_page);
 
@@ -4837,7 +4837,7 @@ static void *make_alloc_exact(unsigned long addr, unsigned int order,
 	struct page *page = virt_to_page((void *)addr);
 	struct page *last = page + nr;
 
-	split_page_owner(page, 1 << order);
+	split_page_owner(page, order);
 	split_page_memcg(page, order);
 	while (page < --last)
 		set_page_refcounted(last);
diff --git a/mm/page_owner.c b/mm/page_owner.c
index c4f9e5506e93..1319e402c2cf 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -292,11 +292,12 @@ void __set_page_owner_migrate_reason(struct page *page, int reason)
 	page_ext_put(page_ext);
 }
 
-void __split_page_owner(struct page *page, unsigned int nr)
+void __split_page_owner(struct page *page, int order)
 {
 	int i;
 	struct page_ext *page_ext = page_ext_get(page);
 	struct page_owner *page_owner;
+	unsigned int nr = 1 << order;
 
 	if (unlikely(!page_ext))
 		return;

From patchwork Tue Feb 13 21:55:16 2024
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 200708
From: Zi Yan
To: "Pankaj Raghav (Samsung)", linux-mm@kvack.org
Cc: Zi Yan, "Matthew Wilcox (Oracle)", David Hildenbrand, Yang Shi, Yu Zhao,
 "Kirill A. Shutemov", Ryan Roberts, Michal Koutný, Roman Gushchin,
 "Zach O'Keefe", Hugh Dickins, Mcgrof Chamberlain, Andrew Morton,
 linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, linux-kselftest@vger.kernel.org
Subject: [PATCH v4 3/7] mm: memcg: make memcg huge page split support any order split.
Date: Tue, 13 Feb 2024 16:55:16 -0500
Message-ID: <20240213215520.1048625-4-zi.yan@sent.com>
In-Reply-To: <20240213215520.1048625-1-zi.yan@sent.com>
References: <20240213215520.1048625-1-zi.yan@sent.com>

From: Zi Yan

split_page_memcg() sets memcg information for the pages after the split.
A new parameter, new_order, is added to tell the order of the subpages in
the new page; it is always 0 for now. This prepares for upcoming changes
to support splitting a huge page to any lower order.
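To make the new calling convention concrete, here is an illustrative sketch
(not part of the patch) of the bookkeeping implied by
split_page_memcg(head, old_order, new_order); the helper name below is
invented for illustration only.

	/* Illustrative sketch only, not kernel code from this series. */
	static inline unsigned int nr_new_folios(int old_order, int new_order)
	{
		/* e.g. old_order = 9, new_order = 0 -> 512; new_order = 2 -> 128 */
		return (1U << old_order) >> new_order;
	}

	/*
	 * After split_page_memcg(head, old_order, new_order):
	 *  - memcg_data is copied to the first page of every new folio,
	 *    i.e. at offsets new_nr, 2 * new_nr, ..., old_nr - new_nr;
	 *  - nr_new_folios(old_order, new_order) - 1 extra css/objcg
	 *    references are taken, one per new folio beyond the head.
	 */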
Signed-off-by: Zi Yan
Acked-by: David Hildenbrand
---
 include/linux/memcontrol.h |  4 ++--
 mm/huge_memory.c           |  2 +-
 mm/memcontrol.c            | 11 ++++++-----
 mm/page_alloc.c            |  4 ++--
 4 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 173bbb53c1ec..9a2dea92be0e 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1163,7 +1163,7 @@ static inline void memcg_memory_event_mm(struct mm_struct *mm,
 	rcu_read_unlock();
 }
 
-void split_page_memcg(struct page *head, int order);
+void split_page_memcg(struct page *head, int old_order, int new_order);
 
 unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order,
 						gfp_t gfp_mask,
@@ -1621,7 +1621,7 @@ void count_memcg_event_mm(struct mm_struct *mm, enum vm_event_item idx)
 {
 }
 
-static inline void split_page_memcg(struct page *head, int order)
+static inline void split_page_memcg(struct page *head, int old_order, int new_order)
 {
 }
 
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index f079b02f1f59..3d30eccd3a7f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2880,7 +2880,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	int order = folio_order(folio);
 
 	/* complete memcg works before add pages to LRU */
-	split_page_memcg(head, order);
+	split_page_memcg(head, order, 0);
 
 	if (folio_test_anon(folio) && folio_test_swapcache(folio)) {
 		offset = swp_offset(folio->swap);
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 404e529644c0..27d53715d8dc 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3608,23 +3608,24 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size)
 /*
  * Because page_memcg(head) is not set on tails, set it now.
  */
-void split_page_memcg(struct page *head, int order)
+void split_page_memcg(struct page *head, int old_order, int new_order)
 {
 	struct folio *folio = page_folio(head);
 	struct mem_cgroup *memcg = folio_memcg(folio);
 	int i;
-	unsigned int nr = 1 << order;
+	unsigned int old_nr = 1 << old_order;
+	unsigned int new_nr = 1 << new_order;
 
 	if (mem_cgroup_disabled() || !memcg)
 		return;
 
-	for (i = 1; i < nr; i++)
+	for (i = new_nr; i < old_nr; i += new_nr)
 		folio_page(folio, i)->memcg_data = folio->memcg_data;
 
 	if (folio_memcg_kmem(folio))
-		obj_cgroup_get_many(__folio_objcg(folio), nr - 1);
+		obj_cgroup_get_many(__folio_objcg(folio), old_nr / new_nr - 1);
 	else
-		css_get_many(&memcg->css, nr - 1);
+		css_get_many(&memcg->css, old_nr / new_nr - 1);
 }
 
 #ifdef CONFIG_SWAP
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b6e8fe6fed67..9d4dd41d0647 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2653,7 +2653,7 @@ void split_page(struct page *page, unsigned int order)
 	for (i = 1; i < (1 << order); i++)
 		set_page_refcounted(page + i);
 	split_page_owner(page, order);
-	split_page_memcg(page, order);
+	split_page_memcg(page, order, 0);
 }
 EXPORT_SYMBOL_GPL(split_page);
 
@@ -4838,7 +4838,7 @@ static void *make_alloc_exact(unsigned long addr, unsigned int order,
 	struct page *last = page + nr;
 
 	split_page_owner(page, order);
-	split_page_memcg(page, order);
+	split_page_memcg(page, order, 0);
 
 	while (page < --last)
 		set_page_refcounted(last);

From patchwork Tue Feb 13 21:55:17 2024
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 200675
From: Zi Yan
To: "Pankaj Raghav (Samsung)", linux-mm@kvack.org
Cc: Zi Yan, "Matthew Wilcox (Oracle)", David Hildenbrand, Yang Shi, Yu Zhao,
 "Kirill A. Shutemov", Ryan Roberts, Michal Koutný, Roman Gushchin,
 "Zach O'Keefe", Hugh Dickins, Mcgrof Chamberlain, Andrew Morton,
 linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, linux-kselftest@vger.kernel.org
Subject: [PATCH v4 4/7] mm: page_owner: add support for splitting to any order in split page_owner.
Date: Tue, 13 Feb 2024 16:55:17 -0500
Message-ID: <20240213215520.1048625-5-zi.yan@sent.com>
In-Reply-To: <20240213215520.1048625-1-zi.yan@sent.com>
References: <20240213215520.1048625-1-zi.yan@sent.com>

From: Zi Yan

Add a new_order parameter to set the new page order in page owner.
This prepares for upcoming changes to support splitting a huge page to
any lower order.
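For a concrete feel of the numbers involved (an illustrative sketch with
made-up example values, not part of the patch): splitting an order-9 THP
down to order-2 folios leaves 128 owner records to update, each now
reporting order 2.

	/* Illustrative sketch only, not kernel code from this series. */
	int old_order = 9, new_order = 2;		/* example values only */
	unsigned int old_nr = 1U << old_order;		/* 512 pages */
	unsigned int new_nr = 1U << new_order;		/* 4 pages per new folio */
	unsigned int new_folios = old_nr / new_nr;	/* 128 new folios */
	/*
	 * __split_page_owner() walks i = 0, new_nr, 2 * new_nr, ... < old_nr
	 * and records page_owner->order = new_order for each new folio.
	 */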
Signed-off-by: Zi Yan
---
 include/linux/page_owner.h | 10 +++++-----
 mm/huge_memory.c           |  2 +-
 mm/page_alloc.c            |  4 ++--
 mm/page_owner.c            |  9 +++++----
 4 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/include/linux/page_owner.h b/include/linux/page_owner.h
index d7878523adfc..a784ba69f67f 100644
--- a/include/linux/page_owner.h
+++ b/include/linux/page_owner.h
@@ -11,7 +11,7 @@ extern struct page_ext_operations page_owner_ops;
 extern void __reset_page_owner(struct page *page, unsigned short order);
 extern void __set_page_owner(struct page *page,
 			unsigned short order, gfp_t gfp_mask);
-extern void __split_page_owner(struct page *page, int order);
+extern void __split_page_owner(struct page *page, int old_order, int new_order);
 extern void __folio_copy_owner(struct folio *newfolio, struct folio *old);
 extern void __set_page_owner_migrate_reason(struct page *page, int reason);
 extern void __dump_page_owner(const struct page *page);
@@ -31,10 +31,10 @@ static inline void set_page_owner(struct page *page,
 	__set_page_owner(page, order, gfp_mask);
 }
 
-static inline void split_page_owner(struct page *page, int order)
+static inline void split_page_owner(struct page *page, int old_order, int new_order)
 {
 	if (static_branch_unlikely(&page_owner_inited))
-		__split_page_owner(page, order);
+		__split_page_owner(page, old_order, new_order);
 }
 static inline void folio_copy_owner(struct folio *newfolio, struct folio *old)
 {
@@ -56,11 +56,11 @@ static inline void reset_page_owner(struct page *page, unsigned short order)
 {
 }
 static inline void set_page_owner(struct page *page,
-			unsigned int order, gfp_t gfp_mask)
+			unsigned short order, gfp_t gfp_mask)
 {
 }
 static inline void split_page_owner(struct page *page,
-			int order)
+			int old_order, int new_order)
 {
 }
 static inline void folio_copy_owner(struct folio *newfolio, struct folio *folio)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 3d30eccd3a7f..ad7133c97428 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2919,7 +2919,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	unlock_page_lruvec(lruvec);
 	/* Caller disabled irqs, so they are still disabled here */
 
-	split_page_owner(head, order);
+	split_page_owner(head, order, 0);
 
 	/* See comment in __split_huge_page_tail() */
 	if (PageAnon(head)) {
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9d4dd41d0647..e0f107b21c98 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2652,7 +2652,7 @@ void split_page(struct page *page, unsigned int order)
 	for (i = 1; i < (1 << order); i++)
 		set_page_refcounted(page + i);
-	split_page_owner(page, order);
+	split_page_owner(page, order, 0);
 	split_page_memcg(page, order, 0);
 }
 EXPORT_SYMBOL_GPL(split_page);
 
@@ -4837,7 +4837,7 @@ static void *make_alloc_exact(unsigned long addr, unsigned int order,
 	struct page *page = virt_to_page((void *)addr);
 	struct page *last = page + nr;
 
-	split_page_owner(page, order);
+	split_page_owner(page, order, 0);
 	split_page_memcg(page, order, 0);
 	while (page < --last)
 		set_page_refcounted(last);
diff --git a/mm/page_owner.c b/mm/page_owner.c
index 1319e402c2cf..ebbffa0501db 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -292,19 +292,20 @@ void __set_page_owner_migrate_reason(struct page *page, int reason)
 	page_ext_put(page_ext);
 }
 
-void __split_page_owner(struct page *page, int order)
+void __split_page_owner(struct page *page, int old_order, int new_order)
 {
 	int i;
 	struct page_ext *page_ext = page_ext_get(page);
 	struct page_owner *page_owner;
-	unsigned int nr = 1 << order;
+	unsigned int old_nr = 1 << old_order;
+	unsigned int new_nr = 1 << new_order;
 
 	if (unlikely(!page_ext))
 		return;
 
-	for (i = 0; i < nr; i++) {
+	for (i = 0; i < old_nr; i += new_nr) {
 		page_owner = get_page_owner(page_ext);
-		page_owner->order = 0;
+		page_owner->order = new_order;
 		page_ext = page_ext_next(page_ext);
 	}
 	page_ext_put(page_ext);

From patchwork Tue Feb 13 21:55:18 2024
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 200688
From: Zi Yan
To: "Pankaj Raghav (Samsung)", linux-mm@kvack.org
Cc: Zi Yan, "Matthew Wilcox (Oracle)", David Hildenbrand, Yang Shi, Yu Zhao, "Kirill A. Shutemov", Ryan Roberts, Michal Koutný, Roman Gushchin, "Zach O'Keefe", Hugh Dickins, Mcgrof Chamberlain, Andrew Morton, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kselftest@vger.kernel.org
Subject: [PATCH v4 5/7] mm: thp: split huge page to any lower order pages (except order-1).
Date: Tue, 13 Feb 2024 16:55:18 -0500
Message-ID: <20240213215520.1048625-6-zi.yan@sent.com>
In-Reply-To: <20240213215520.1048625-1-zi.yan@sent.com>
References: <20240213215520.1048625-1-zi.yan@sent.com>

From: Zi Yan

To split a THP to any lower order (except order-1) pages, we need to reform the THPs on the subpages at the given order and add page refcounts based on the new page order.
Also we need to reinitialize page_deferred_list after removing the page from the split_queue, otherwise a subsequent split will see list corruption when checking the page_deferred_list again. It has many uses, like minimizing the number of pages after truncating a huge pagecache page. For anonymous THPs, we can only split them to order-0 like before until we add support for any size anonymous THPs. Order-1 folio is not supported because _deferred_list, which is used by partially mapped folios, is stored in subpage 2 and an order-1 folio only has subpage 0 and 1. Signed-off-by: Zi Yan --- include/linux/huge_mm.h | 21 +++++--- mm/huge_memory.c | 114 +++++++++++++++++++++++++++++++--------- 2 files changed, 101 insertions(+), 34 deletions(-) diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h index 5adb86af35fc..de0c89105076 100644 --- a/include/linux/huge_mm.h +++ b/include/linux/huge_mm.h @@ -265,10 +265,11 @@ unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr, void folio_prep_large_rmappable(struct folio *folio); bool can_split_folio(struct folio *folio, int *pextra_pins); -int split_huge_page_to_list(struct page *page, struct list_head *list); +int split_huge_page_to_list_to_order(struct page *page, struct list_head *list, + unsigned int new_order); static inline int split_huge_page(struct page *page) { - return split_huge_page_to_list(page, NULL); + return split_huge_page_to_list_to_order(page, NULL, 0); } void deferred_split_folio(struct folio *folio); @@ -422,7 +423,8 @@ can_split_folio(struct folio *folio, int *pextra_pins) return false; } static inline int -split_huge_page_to_list(struct page *page, struct list_head *list) +split_huge_page_to_list_to_order(struct page *page, struct list_head *list, + unsigned int new_order) { return 0; } @@ -519,17 +521,20 @@ static inline bool thp_migration_supported(void) } #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ -static inline int split_folio_to_list(struct folio *folio, - struct list_head *list) +static inline int split_folio_to_list_to_order(struct folio *folio, + struct list_head *list, int new_order) { - return split_huge_page_to_list(&folio->page, list); + return split_huge_page_to_list_to_order(&folio->page, list, new_order); } -static inline int split_folio(struct folio *folio) +static inline int split_folio_to_order(struct folio *folio, int new_order) { - return split_folio_to_list(folio, NULL); + return split_folio_to_list_to_order(folio, NULL, new_order); } +#define split_folio_to_list(f, l) split_folio_to_list_to_order(f, l, 0) +#define split_folio(f) split_folio_to_order(f, 0) + /* * archs that select ARCH_WANTS_THP_SWAP but don't support THP_SWP due to * limitations in the implementation like arm64 MTE can override this to diff --git a/mm/huge_memory.c b/mm/huge_memory.c index ad7133c97428..d0e555a8ea98 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -2718,11 +2718,14 @@ void vma_adjust_trans_huge(struct vm_area_struct *vma, static void unmap_folio(struct folio *folio) { - enum ttu_flags ttu_flags = TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD | - TTU_SYNC | TTU_BATCH_FLUSH; + enum ttu_flags ttu_flags = TTU_RMAP_LOCKED | TTU_SYNC | + TTU_BATCH_FLUSH; VM_BUG_ON_FOLIO(!folio_test_large(folio), folio); + if (folio_test_pmd_mappable(folio)) + ttu_flags |= TTU_SPLIT_HUGE_PMD; + /* * Anon pages need migration entries to preserve them, but file * pages can simply be left unmapped, then faulted back on demand. 
@@ -2756,7 +2759,6 @@ static void lru_add_page_tail(struct page *head, struct page *tail, struct lruvec *lruvec, struct list_head *list) { VM_BUG_ON_PAGE(!PageHead(head), head); - VM_BUG_ON_PAGE(PageCompound(tail), head); VM_BUG_ON_PAGE(PageLRU(tail), head); lockdep_assert_held(&lruvec->lru_lock); @@ -2777,7 +2779,8 @@ static void lru_add_page_tail(struct page *head, struct page *tail, } static void __split_huge_page_tail(struct folio *folio, int tail, - struct lruvec *lruvec, struct list_head *list) + struct lruvec *lruvec, struct list_head *list, + unsigned int new_order) { struct page *head = &folio->page; struct page *page_tail = head + tail; @@ -2847,10 +2850,15 @@ static void __split_huge_page_tail(struct folio *folio, int tail, * which needs correct compound_head(). */ clear_compound_head(page_tail); + if (new_order) { + prep_compound_page(page_tail, new_order); + folio_prep_large_rmappable(page_folio(page_tail)); + } /* Finally unfreeze refcount. Additional reference from page cache. */ - page_ref_unfreeze(page_tail, 1 + (!folio_test_anon(folio) || - folio_test_swapcache(folio))); + page_ref_unfreeze(page_tail, + 1 + ((!folio_test_anon(folio) || folio_test_swapcache(folio)) ? + folio_nr_pages(page_folio(page_tail)) : 0)); if (folio_test_young(folio)) folio_set_young(new_folio); @@ -2868,7 +2876,7 @@ static void __split_huge_page_tail(struct folio *folio, int tail, } static void __split_huge_page(struct page *page, struct list_head *list, - pgoff_t end) + pgoff_t end, unsigned int new_order) { struct folio *folio = page_folio(page); struct page *head = &folio->page; @@ -2877,10 +2885,11 @@ static void __split_huge_page(struct page *page, struct list_head *list, unsigned long offset = 0; unsigned int nr = thp_nr_pages(head); int i, nr_dropped = 0; + unsigned int new_nr = 1 << new_order; int order = folio_order(folio); /* complete memcg works before add pages to LRU */ - split_page_memcg(head, order, 0); + split_page_memcg(head, order, new_order); if (folio_test_anon(folio) && folio_test_swapcache(folio)) { offset = swp_offset(folio->swap); @@ -2893,8 +2902,8 @@ static void __split_huge_page(struct page *page, struct list_head *list, ClearPageHasHWPoisoned(head); - for (i = nr - 1; i >= 1; i--) { - __split_huge_page_tail(folio, i, lruvec, list); + for (i = nr - new_nr; i >= new_nr; i -= new_nr) { + __split_huge_page_tail(folio, i, lruvec, list, new_order); /* Some pages can be beyond EOF: drop them from page cache */ if (head[i].index >= end) { struct folio *tail = page_folio(head + i); @@ -2910,29 +2919,41 @@ static void __split_huge_page(struct page *page, struct list_head *list, __xa_store(&head->mapping->i_pages, head[i].index, head + i, 0); } else if (swap_cache) { + /* + * split anonymous THPs (including swapped out ones) to + * non-zero order not supported + */ + VM_WARN_ONCE(new_order, + "Split swap-cached anon folio to non-0 order not supported"); __xa_store(&swap_cache->i_pages, offset + i, head + i, 0); } } - ClearPageCompound(head); + if (!new_order) + ClearPageCompound(head); + else { + struct folio *new_folio = (struct folio *)head; + + folio_set_order(new_folio, new_order); + } unlock_page_lruvec(lruvec); /* Caller disabled irqs, so they are still disabled here */ - split_page_owner(head, order, 0); + split_page_owner(head, order, new_order); /* See comment in __split_huge_page_tail() */ if (PageAnon(head)) { /* Additional pin to swap cache */ if (PageSwapCache(head)) { - page_ref_add(head, 2); + page_ref_add(head, 1 + new_nr); xa_unlock(&swap_cache->i_pages); } else { 
page_ref_inc(head); } } else { /* Additional pin to page cache */ - page_ref_add(head, 2); + page_ref_add(head, 1 + new_nr); xa_unlock(&head->mapping->i_pages); } local_irq_enable(); @@ -2944,7 +2965,15 @@ static void __split_huge_page(struct page *page, struct list_head *list, if (folio_test_swapcache(folio)) split_swap_cluster(folio->swap); - for (i = 0; i < nr; i++) { + /* + * set page to its compound_head when split to non order-0 pages, so + * we can skip unlocking it below, since PG_locked is transferred to + * the compound_head of the page and the caller will unlock it. + */ + if (new_order) + page = compound_head(page); + + for (i = 0; i < nr; i += new_nr) { struct page *subpage = head + i; if (subpage == page) continue; @@ -2978,29 +3007,35 @@ bool can_split_folio(struct folio *folio, int *pextra_pins) } /* - * This function splits huge page into normal pages. @page can point to any - * subpage of huge page to split. Split doesn't change the position of @page. + * This function splits huge page into pages in @new_order. @page can point to + * any subpage of huge page to split. Split doesn't change the position of + * @page. + * + * NOTE: order-1 folio is not supported because _deferred_list, which is used + * by partially mapped folios, is stored in subpage 2 and an order-1 folio + * only has subpage 0 and 1. * * Only caller must hold pin on the @page, otherwise split fails with -EBUSY. * The huge page must be locked. * * If @list is null, tail pages will be added to LRU list, otherwise, to @list. * - * Both head page and tail pages will inherit mapping, flags, and so on from - * the hugepage. + * Pages in new_order will inherit mapping, flags, and so on from the hugepage. * - * GUP pin and PG_locked transferred to @page. Rest subpages can be freed if - * they are not mapped. + * GUP pin and PG_locked transferred to @page or the compound page @page belongs + * to. Rest subpages can be freed if they are not mapped. * * Returns 0 if the hugepage is split successfully. * Returns -EBUSY if the page is pinned or if anon_vma disappeared from under * us. 
*/ -int split_huge_page_to_list(struct page *page, struct list_head *list) +int split_huge_page_to_list_to_order(struct page *page, struct list_head *list, + unsigned int new_order) { struct folio *folio = page_folio(page); struct deferred_split *ds_queue = get_deferred_split_queue(folio); - XA_STATE(xas, &folio->mapping->i_pages, folio->index); + /* reset xarray order to new order after split */ + XA_STATE_ORDER(xas, &folio->mapping->i_pages, folio->index, new_order); struct anon_vma *anon_vma = NULL; struct address_space *mapping = NULL; int extra_pins, ret; @@ -3010,6 +3045,26 @@ int split_huge_page_to_list(struct page *page, struct list_head *list) VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio); VM_BUG_ON_FOLIO(!folio_test_large(folio), folio); + /* Cannot split THP to order-1 (no order-1 THPs) */ + if (new_order == 1) { + VM_WARN_ONCE(1, "Cannot split to order-1 folio"); + return -EINVAL; + } + + if (new_order) { + /* Split shmem folio to non-zero order not supported */ + if (shmem_mapping(folio->mapping)) { + VM_WARN_ONCE(1, "Split shmem folio to non-0 order not support"); + return -EINVAL; + } + /* No split if the file system does not support large folio */ + if (!mapping_large_folio_support(folio->mapping)) { + VM_WARN_ONCE(1, "Split file folio to non-0 order not support"); + return -EINVAL; + } + } + + is_hzp = is_huge_zero_page(&folio->page); if (is_hzp) { pr_warn_ratelimited("Called split_huge_page for huge zero page\n"); @@ -3105,14 +3160,21 @@ int split_huge_page_to_list(struct page *page, struct list_head *list) if (folio_ref_freeze(folio, 1 + extra_pins)) { if (!list_empty(&folio->_deferred_list)) { ds_queue->split_queue_len--; - list_del(&folio->_deferred_list); + /* + * Reinitialize page_deferred_list after removing the + * page from the split_queue, otherwise a subsequent + * split will see list corruption when checking the + * page_deferred_list. 
+ */ + list_del_init(&folio->_deferred_list); } spin_unlock(&ds_queue->split_queue_lock); if (mapping) { int nr = folio_nr_pages(folio); xas_split(&xas, folio, folio_order(folio)); - if (folio_test_pmd_mappable(folio)) { + if (folio_test_pmd_mappable(folio) && + new_order < HPAGE_PMD_ORDER) { if (folio_test_swapbacked(folio)) { __lruvec_stat_mod_folio(folio, NR_SHMEM_THPS, -nr); @@ -3124,7 +3186,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list) } } - __split_huge_page(page, list, end); + __split_huge_page(page, list, end, new_order); ret = 0; } else { spin_unlock(&ds_queue->split_queue_lock);
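A sketch of how a caller might use the interface added by this patch, for example to reduce a file-backed THP to order-4 folios (illustrative only; shrink_to_order_4() is a made-up helper, and the rules follow the comments above: the caller holds a pin and the folio lock, order-1 is rejected, and non-zero orders require a file mapping that supports large folios):

#include <linux/errno.h>
#include <linux/huge_mm.h>
#include <linux/pagemap.h>

/* hypothetical caller, not part of the patch */
static int shrink_to_order_4(struct folio *folio)
{
	int ret;

	if (!folio_test_large(folio))
		return 0;			/* nothing to split */

	if (!folio_trylock(folio))
		return -EAGAIN;			/* the folio must be locked across the split */

	ret = split_folio_to_order(folio, 4);	/* wrapper added by this patch */

	folio_unlock(folio);			/* PG_locked stays with the folio containing the passed page */
	return ret;
}

Splitting to new_order == 1 would return -EINVAL per the check added above, and anonymous or shmem folios can still only be split to order-0.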
From patchwork Tue Feb 13 21:55:19 2024
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 200691
From: Zi Yan
To: "Pankaj Raghav (Samsung)", linux-mm@kvack.org
Cc: Zi Yan, "Matthew Wilcox (Oracle)", David Hildenbrand, Yang Shi, Yu Zhao, "Kirill A. Shutemov", Ryan Roberts, Michal Koutný, Roman Gushchin, "Zach O'Keefe", Hugh Dickins, Mcgrof Chamberlain, Andrew Morton, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kselftest@vger.kernel.org
Subject: [PATCH v4 6/7] mm: truncate: split huge page cache page to a non-zero order if possible.
Date: Tue, 13 Feb 2024 16:55:19 -0500
Message-ID: <20240213215520.1048625-7-zi.yan@sent.com>
In-Reply-To: <20240213215520.1048625-1-zi.yan@sent.com>
References: <20240213215520.1048625-1-zi.yan@sent.com>

From: Zi Yan

To minimize the number of pages after a huge page truncation, we do not need to split it all the way down to order-0. The huge page has at most three parts: the part before the offset, the part to be truncated, and the part remaining at the end.
Find the greatest common divisor of these three sizes and calculate the new page order from it, so we can split the huge page to this order and keep the remaining pages as large and as few as possible.

Signed-off-by: Zi Yan --- mm/truncate.c | 21 +++++++++++++++++++-- 1 file changed, 19 insertions(+), 2 deletions(-) diff --git a/mm/truncate.c b/mm/truncate.c index 725b150e47ac..49ddbbf7a617 100644 --- a/mm/truncate.c +++ b/mm/truncate.c @@ -21,6 +21,7 @@ #include #include #include +#include <linux/gcd.h> #include "internal.h" /* @@ -210,7 +211,8 @@ int truncate_inode_folio(struct address_space *mapping, struct folio *folio) bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end) { loff_t pos = folio_pos(folio); - unsigned int offset, length; + unsigned int offset, length, remaining; + unsigned int new_order = folio_order(folio); if (pos < start) offset = start - pos; @@ -221,6 +223,7 @@ bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end) length = length - offset; else length = end + 1 - pos - offset; + remaining = folio_size(folio) - offset - length; folio_wait_writeback(folio); if (length == folio_size(folio)) { @@ -235,11 +238,25 @@ bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end) */ folio_zero_range(folio, offset, length); + /* + * Use the greatest common divisor of offset, length, and remaining + * as the smallest page size and compute the new order from it. So we + * can truncate a subpage as large as possible. Round up gcd to + * PAGE_SIZE, otherwise ilog2 can give -1 when gcd/PAGE_SIZE is 0. + */ + new_order = ilog2(round_up(gcd(gcd(offset, length), remaining), + PAGE_SIZE) / PAGE_SIZE); + + /* order-1 THP not supported, downgrade to order-0 */ + if (new_order == 1) + new_order = 0; + + if (folio_has_private(folio)) folio_invalidate(folio, offset, length); if (!folio_test_large(folio)) return true; - if (split_folio(folio) == 0) + if (split_huge_page_to_list_to_order(&folio->page, NULL, new_order) == 0) return true; if (folio_test_dirty(folio)) return false;
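A worked example of the order computation above (standalone C, not kernel code; the folio size and truncation range are made-up values, and gcd(), ilog2(), and round_up() are reimplemented here only for the demonstration):

#include <stdio.h>

#define PAGE_SIZE 4096u

static unsigned int gcd(unsigned int a, unsigned int b)
{
	while (b) {
		unsigned int t = a % b;

		a = b;
		b = t;
	}
	return a;
}

static unsigned int ilog2(unsigned int x)
{
	unsigned int log = 0;

	while (x >>= 1)
		log++;
	return log;
}

int main(void)
{
	/* a 2MB pagecache folio; truncate the range [1MB, 1.5MB) inside it */
	unsigned int folio_size = 2048u * 1024;
	unsigned int offset = 1024u * 1024;
	unsigned int length = 512u * 1024;
	unsigned int remaining = folio_size - offset - length;
	unsigned int g = gcd(gcd(offset, length), remaining);
	unsigned int new_order;

	/* round_up(g, PAGE_SIZE), so ilog2() never sees 0 */
	g = (g + PAGE_SIZE - 1) / PAGE_SIZE * PAGE_SIZE;
	new_order = ilog2(g / PAGE_SIZE);

	/* order-1 THP not supported, downgrade to order-0 */
	if (new_order == 1)
		new_order = 0;

	printf("split to order %u (%u KB folios)\n",
	       new_order, (PAGE_SIZE << new_order) / 1024);
	return 0;
}

With these numbers the folio is split into order-7 (512 KB) pieces rather than 512 base pages.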
From patchwork Tue Feb 13 21:55:20 2024
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 200681
From: Zi Yan
To: "Pankaj Raghav (Samsung)", linux-mm@kvack.org
Cc: Zi Yan, "Matthew Wilcox (Oracle)", David Hildenbrand, Yang Shi, Yu Zhao, "Kirill A. Shutemov", Ryan Roberts, Michal Koutný, Roman Gushchin, "Zach O'Keefe", Hugh Dickins, Mcgrof Chamberlain, Andrew Morton, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kselftest@vger.kernel.org
Subject: [PATCH v4 7/7] mm: huge_memory: enable debugfs to split huge pages to any order.
Date: Tue, 13 Feb 2024 16:55:20 -0500
Message-ID: <20240213215520.1048625-8-zi.yan@sent.com>
In-Reply-To: <20240213215520.1048625-1-zi.yan@sent.com>
References: <20240213215520.1048625-1-zi.yan@sent.com>

From: Zi Yan

The split_huge_pages debugfs interface is used to test split_huge_page_to_list_to_order for pagecache THPs. Also add test cases for split_huge_page_to_list_to_order via debugfs, truncating a file, and punching holes in a file.

Signed-off-by: Zi Yan --- mm/huge_memory.c | 34 ++- .../selftests/mm/split_huge_page_test.c | 223 +++++++++++++++++- 2 files changed, 239 insertions(+), 18 deletions(-) diff --git a/mm/huge_memory.c b/mm/huge_memory.c index d0e555a8ea98..0564b007cbd1 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -3399,7 +3399,7 @@ static inline bool vma_not_suitable_for_thp_split(struct vm_area_struct *vma) } static int split_huge_pages_pid(int pid, unsigned long vaddr_start, - unsigned long vaddr_end) + unsigned long vaddr_end, unsigned int new_order) { int ret = 0; struct task_struct *task; @@ -3463,13 +3463,19 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start, goto next; total++; - if (!can_split_folio(folio, NULL)) + /* + * For folios with private, split_huge_page_to_list_to_order() + * will try to drop it before split and then check if the folio + * can be split or not. So skip the check here.
+ */ + if (!folio_test_private(folio) && + !can_split_folio(folio, NULL)) goto next; if (!folio_trylock(folio)) goto next; - if (!split_folio(folio)) + if (!split_folio_to_order(folio, new_order)) split++; folio_unlock(folio); @@ -3487,7 +3493,7 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start, } static int split_huge_pages_in_file(const char *file_path, pgoff_t off_start, - pgoff_t off_end) + pgoff_t off_end, unsigned int new_order) { struct filename *file; struct file *candidate; @@ -3526,7 +3532,7 @@ static int split_huge_pages_in_file(const char *file_path, pgoff_t off_start, if (!folio_trylock(folio)) goto next; - if (!split_folio(folio)) + if (!split_folio_to_order(folio, new_order)) split++; folio_unlock(folio); @@ -3551,10 +3557,14 @@ static ssize_t split_huge_pages_write(struct file *file, const char __user *buf, { static DEFINE_MUTEX(split_debug_mutex); ssize_t ret; - /* hold pid, start_vaddr, end_vaddr or file_path, off_start, off_end */ + /* + * hold pid, start_vaddr, end_vaddr, new_order or + * file_path, off_start, off_end, new_order + */ char input_buf[MAX_INPUT_BUF_SZ]; int pid; unsigned long vaddr_start, vaddr_end; + unsigned int new_order = 0; ret = mutex_lock_interruptible(&split_debug_mutex); if (ret) @@ -3583,29 +3593,29 @@ static ssize_t split_huge_pages_write(struct file *file, const char __user *buf, goto out; } - ret = sscanf(buf, "0x%lx,0x%lx", &off_start, &off_end); - if (ret != 2) { + ret = sscanf(buf, "0x%lx,0x%lx,%d", &off_start, &off_end, &new_order); + if (ret != 2 && ret != 3) { ret = -EINVAL; goto out; } - ret = split_huge_pages_in_file(file_path, off_start, off_end); + ret = split_huge_pages_in_file(file_path, off_start, off_end, new_order); if (!ret) ret = input_len; goto out; } - ret = sscanf(input_buf, "%d,0x%lx,0x%lx", &pid, &vaddr_start, &vaddr_end); + ret = sscanf(input_buf, "%d,0x%lx,0x%lx,%d", &pid, &vaddr_start, &vaddr_end, &new_order); if (ret == 1 && pid == 1) { split_huge_pages_all(); ret = strlen(input_buf); goto out; - } else if (ret != 3) { + } else if (ret != 3 && ret != 4) { ret = -EINVAL; goto out; } - ret = split_huge_pages_pid(pid, vaddr_start, vaddr_end); + ret = split_huge_pages_pid(pid, vaddr_start, vaddr_end, new_order); if (!ret) ret = strlen(input_buf); out: diff --git a/tools/testing/selftests/mm/split_huge_page_test.c b/tools/testing/selftests/mm/split_huge_page_test.c index 7b698a848bab..ffed5ae24566 100644 --- a/tools/testing/selftests/mm/split_huge_page_test.c +++ b/tools/testing/selftests/mm/split_huge_page_test.c @@ -16,6 +16,7 @@ #include #include #include +#include #include "vm_util.h" #include "../kselftest.h" @@ -24,10 +25,12 @@ unsigned int pageshift; uint64_t pmd_pagesize; #define SPLIT_DEBUGFS "/sys/kernel/debug/split_huge_pages" +#define SMAP_PATH "/proc/self/smaps" +#define THP_FS_PATH "/mnt/thp_fs" #define INPUT_MAX 80 -#define PID_FMT "%d,0x%lx,0x%lx" -#define PATH_FMT "%s,0x%lx,0x%lx" +#define PID_FMT "%d,0x%lx,0x%lx,%d" +#define PATH_FMT "%s,0x%lx,0x%lx,%d" #define PFN_MASK ((1UL<<55)-1) #define KPF_THP (1UL<<22) @@ -102,7 +105,7 @@ void split_pmd_thp(void) /* split all THPs */ write_debugfs(PID_FMT, getpid(), (uint64_t)one_page, - (uint64_t)one_page + len); + (uint64_t)one_page + len, 0); for (i = 0; i < len; i++) if (one_page[i] != (char)i) @@ -177,7 +180,7 @@ void split_pte_mapped_thp(void) /* split all remapped THPs */ write_debugfs(PID_FMT, getpid(), (uint64_t)pte_mapped, - (uint64_t)pte_mapped + pagesize * 4); + (uint64_t)pte_mapped + pagesize * 4, 0); /* smap does not show THPs 
after mremap, use kpageflags instead */ thp_size = 0; @@ -237,7 +240,7 @@ void split_file_backed_thp(void) } /* split the file-backed THP */ - write_debugfs(PATH_FMT, testfile, pgoff_start, pgoff_end); + write_debugfs(PATH_FMT, testfile, pgoff_start, pgoff_end, 0); status = unlink(testfile); if (status) { @@ -265,8 +268,188 @@ void split_file_backed_thp(void) ksft_exit_fail_msg("Error occurred\n"); } +void create_pagecache_thp_and_fd(const char *testfile, size_t fd_size, int *fd, char **addr) +{ + size_t i; + int dummy; + + srand(time(NULL)); + + *fd = open(testfile, O_CREAT | O_RDWR, 0664); + if (*fd == -1) + ksft_exit_fail_msg("Failed to create a file at "THP_FS_PATH); + + for (i = 0; i < fd_size; i++) { + unsigned char byte = (unsigned char)i; + + write(*fd, &byte, sizeof(byte)); + } + close(*fd); + sync(); + *fd = open("/proc/sys/vm/drop_caches", O_WRONLY); + if (*fd == -1) { + ksft_perror("open drop_caches"); + goto err_out_unlink; + } + if (write(*fd, "3", 1) != 1) { + ksft_perror("write to drop_caches"); + goto err_out_unlink; + } + close(*fd); + + *fd = open(testfile, O_RDWR); + if (*fd == -1) { + ksft_perror("Failed to open a file at "THP_FS_PATH); + goto err_out_unlink; + } + + *addr = mmap(NULL, fd_size, PROT_READ|PROT_WRITE, MAP_SHARED, *fd, 0); + if (*addr == (char *)-1) { + ksft_perror("cannot mmap"); + goto err_out_close; + } + madvise(*addr, fd_size, MADV_HUGEPAGE); + + for (size_t i = 0; i < fd_size; i++) + dummy += *(*addr + i); + + if (!check_huge_file(*addr, fd_size / pmd_pagesize, pmd_pagesize)) { + ksft_print_msg("No large pagecache folio generated, please mount a filesystem supporting large folio at "THP_FS_PATH"\n"); + goto err_out_close; + } + return; +err_out_close: + close(*fd); +err_out_unlink: + unlink(testfile); + ksft_exit_fail_msg("Failed to create large pagecache folios\n"); +} + +void split_thp_in_pagecache_to_order(size_t fd_size, int order) +{ + int fd; + char *addr; + size_t i; + const char testfile[] = THP_FS_PATH "/test"; + int err = 0; + + create_pagecache_thp_and_fd(testfile, fd_size, &fd, &addr); + + write_debugfs(PID_FMT, getpid(), (uint64_t)addr, (uint64_t)addr + fd_size, order); + + for (i = 0; i < fd_size; i++) + if (*(addr + i) != (char)i) { + ksft_print_msg("%lu byte corrupted in the file\n", i); + err = EXIT_FAILURE; + goto out; + } + + if (!check_huge_file(addr, 0, pmd_pagesize)) { + ksft_print_msg("Still FilePmdMapped not split\n"); + err = EXIT_FAILURE; + goto out; + } + +out: + close(fd); + unlink(testfile); + if (err) + ksft_exit_fail_msg("Split PMD-mapped pagecache folio to order %d failed\n", order); + ksft_test_result_pass("Split PMD-mapped pagecache folio to order %d passed\n", order); +} + +void truncate_thp_in_pagecache_to_order(size_t fd_size, int order) +{ + int fd; + char *addr; + size_t i; + const char testfile[] = THP_FS_PATH "/test"; + int err = 0; + + create_pagecache_thp_and_fd(testfile, fd_size, &fd, &addr); + + ftruncate(fd, pagesize << order); + + for (i = 0; i < (pagesize << order); i++) + if (*(addr + i) != (char)i) { + ksft_print_msg("%lu byte corrupted in the file\n", i); + err = EXIT_FAILURE; + goto out; + } + + if (!check_huge_file(addr, 0, pmd_pagesize)) { + ksft_print_msg("Still FilePmdMapped not split after truncate\n"); + err = EXIT_FAILURE; + goto out; + } + +out: + close(fd); + unlink(testfile); + if (err) + ksft_exit_fail_msg("Truncate PMD-mapped pagecache folio to order %d failed\n", order); + ksft_test_result_pass("Truncate PMD-mapped pagecache folio to order %d passed\n", order); +} + +void 
punch_hole_in_pagecache_thp(size_t fd_size, off_t offset[], off_t len[], + int n, int num_left_thps) +{ + int fd, j; + char *addr; + size_t i; + const char testfile[] = THP_FS_PATH "/test"; + int err = 0; + + create_pagecache_thp_and_fd(testfile, fd_size, &fd, &addr); + + for (j = 0; j < n; j++) { + ksft_print_msg("punch a hole to %ld kB PMD-mapped pagecache page at addr: %lx, offset %ld, and len %ld ...\n", + fd_size >> 10, (unsigned long)addr, offset[j], len[j]); + fallocate(fd, FALLOC_FL_PUNCH_HOLE|FALLOC_FL_KEEP_SIZE, offset[j], len[j]); + } + + for (i = 0; i < fd_size; i++) { + int in_hole = 0; + + for (j = 0; j < n; j++) + if (i >= offset[j] && i < (offset[j] + len[j])) { + in_hole = 1; + break; + } + + if (in_hole) { + if (*(addr + i)) { + ksft_print_msg("%lu byte non-zero after punch\n", i); + err = EXIT_FAILURE; + goto out; + } + continue; + } + if (*(addr + i) != (char)i) { + ksft_print_msg("%lu byte corrupted in the file\n", i); + err = EXIT_FAILURE; + goto out; + } + } + + if (!check_huge_file(addr, num_left_thps, pmd_pagesize)) { + ksft_print_msg("Still FilePmdMapped not split after punch\n"); + goto out; + } +out: + close(fd); + unlink(testfile); + if (err) + ksft_exit_fail_msg("Punch holes in PMD-mapped pagecache folio failed\n"); + ksft_test_result_pass("Punch holes PMD-mapped pagecache folio passed\n"); +} + int main(int argc, char **argv) { + int i; + size_t fd_size; + off_t offset[2], len[2]; + ksft_print_header(); if (geteuid() != 0) { @@ -274,7 +457,7 @@ int main(int argc, char **argv) ksft_finished(); } - ksft_set_plan(3); + ksft_set_plan(3+8+9+2); pagesize = getpagesize(); pageshift = ffs(pagesize) - 1; @@ -282,9 +465,37 @@ int main(int argc, char **argv) if (!pmd_pagesize) ksft_exit_fail_msg("Reading PMD pagesize failed\n"); + fd_size = 2 * pmd_pagesize; + split_pmd_thp(); split_pte_mapped_thp(); split_file_backed_thp(); + for (i = 8; i >= 0; i--) + if (i != 1) + split_thp_in_pagecache_to_order(fd_size, i); + + /* + * for i is 1, truncate code in the kernel should create order-0 pages + * instead of order-1 THPs, since order-1 THP is not supported. No error + * is expected. + */ + for (i = 8; i >= 0; i--) + truncate_thp_in_pagecache_to_order(fd_size, i); + + offset[0] = 123; + offset[1] = 4 * pagesize; + len[0] = 200 * pagesize; + len[1] = 16 * pagesize; + punch_hole_in_pagecache_thp(fd_size, offset, len, 2, 1); + + offset[0] = 259 * pagesize + pagesize / 2; + offset[1] = 33 * pagesize; + len[0] = 129 * pagesize; + len[1] = 16 * pagesize; + punch_hole_in_pagecache_thp(fd_size, offset, len, 2, 1); + ksft_finished(); + + return 0; }
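As a usage sketch of the extended debugfs input format exercised by the selftest above (illustrative only; the pid, addresses, and path are placeholders, and writing to this file needs root with debugfs mounted):

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	/*
	 * <pid>,<vaddr_start>,<vaddr_end>[,<new_order>] splits the THPs mapped
	 * in that address range of the given process; the file-backed variant
	 * takes <path>,<off_start>,<off_end>[,<new_order>], e.g.
	 * "/mnt/thp_fs/test,0x0,0x200000,4". All values here are placeholders.
	 */
	const char *cmd = "1234,0x700000000000,0x700000400000,4";
	int fd = open("/sys/kernel/debug/split_huge_pages", O_WRONLY);
	ssize_t len = (ssize_t)strlen(cmd);

	if (fd < 0) {
		perror("open split_huge_pages");
		return 1;
	}
	if (write(fd, cmd, len) != len) {
		perror("write split_huge_pages");
		close(fd);
		return 1;
	}
	close(fd);
	return 0;
}

When the trailing new_order field is omitted, the kernel keeps the old behaviour and splits to order-0.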