From patchwork Mon Feb 26 09:49:24 2024
X-Patchwork-Submitter: "Pankaj Raghav (Samsung)"
X-Patchwork-Id: 207032
From: "Pankaj Raghav (Samsung)"
To: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, david@fromorbit.com, chandan.babu@oracle.com,
    akpm@linux-foundation.org, mcgrof@kernel.org, ziy@nvidia.com, hare@suse.de,
    djwong@kernel.org, gost.dev@samsung.com, linux-mm@kvack.org, willy@infradead.org
Subject: [PATCH 01/13] mm: Support order-1 folios in the page cache
Date: Mon, 26 Feb 2024 10:49:24 +0100
Message-ID: <20240226094936.2677493-2-kernel@pankajraghav.com>
In-Reply-To: <20240226094936.2677493-1-kernel@pankajraghav.com>
References: <20240226094936.2677493-1-kernel@pankajraghav.com>

From: "Matthew Wilcox (Oracle)"

Folios of order 1 have no space to store the deferred list. This is not a
problem for the page cache as file-backed folios are never placed on the
deferred list. All we need to do is prevent the core MM from touching the
deferred list for order 1 folios and remove the code which prevented us from
allocating order 1 folios.

Link: https://lore.kernel.org/linux-mm/90344ea7-4eec-47ee-5996-0c22f42d6a6a@google.com/
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/huge_mm.h |  7 +++++--
 mm/filemap.c            |  2 --
 mm/huge_memory.c        | 23 ++++++++++++++++++-----
 mm/internal.h           |  4 +---
 mm/readahead.c          |  3 ---
 5 files changed, 24 insertions(+), 15 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 5adb86af35fc..916a2a539517 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -263,7 +263,7 @@ unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
 unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
 		unsigned long len, unsigned long pgoff, unsigned long flags);
 
-void folio_prep_large_rmappable(struct folio *folio);
+struct folio *folio_prep_large_rmappable(struct folio *folio);
 bool can_split_folio(struct folio *folio, int *pextra_pins);
 int split_huge_page_to_list(struct page *page, struct list_head *list);
 static inline int split_huge_page(struct page *page)
@@ -410,7 +410,10 @@ static inline unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
 	return 0;
 }
 
-static inline void folio_prep_large_rmappable(struct folio *folio) {}
+static inline struct folio *folio_prep_large_rmappable(struct folio *folio)
+{
+	return folio;
+}
 
 #define transparent_hugepage_flags 0UL
 
diff --git a/mm/filemap.c b/mm/filemap.c
index 750e779c23db..2b00442b9d19 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1912,8 +1912,6 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
 			gfp_t alloc_gfp = gfp;
 
 			err = -ENOMEM;
-			if (order == 1)
-				order = 0;
 			if (order > 0)
 				alloc_gfp |= __GFP_NORETRY | __GFP_NOWARN;
 			folio = filemap_alloc_folio(alloc_gfp, order);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 94c958f7ebb5..81fd1ba57088 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -788,11 +788,15 @@ struct deferred_split *get_deferred_split_queue(struct folio *folio)
 }
 #endif
 
-void folio_prep_large_rmappable(struct folio *folio)
+struct folio *folio_prep_large_rmappable(struct folio *folio)
 {
-	VM_BUG_ON_FOLIO(folio_order(folio) < 2, folio);
-	INIT_LIST_HEAD(&folio->_deferred_list);
+	if (!folio || !folio_test_large(folio))
+		return folio;
+	if (folio_order(folio) > 1)
+		INIT_LIST_HEAD(&folio->_deferred_list);
 	folio_set_large_rmappable(folio);
+
+	return folio;
 }
 
 static inline bool is_transparent_hugepage(struct folio *folio)
@@ -3082,7 +3086,8 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	/* Prevent deferred_split_scan() touching ->_refcount */
 	spin_lock(&ds_queue->split_queue_lock);
 	if (folio_ref_freeze(folio, 1 + extra_pins)) {
-		if (!list_empty(&folio->_deferred_list)) {
+		if (folio_order(folio) > 1 &&
+		    !list_empty(&folio->_deferred_list)) {
 			ds_queue->split_queue_len--;
 			list_del(&folio->_deferred_list);
 		}
@@ -3133,6 +3138,9 @@ void folio_undo_large_rmappable(struct folio *folio)
 	struct deferred_split *ds_queue;
 	unsigned long flags;
 
+	if (folio_order(folio) <= 1)
+		return;
+
 	/*
 	 * At this point, there is no one trying to add the folio to
 	 * deferred_list. If folio is not in deferred_list, it's safe
@@ -3158,7 +3166,12 @@ void deferred_split_folio(struct folio *folio)
 #endif
 	unsigned long flags;
 
-	VM_BUG_ON_FOLIO(folio_order(folio) < 2, folio);
+	/*
+	 * Order 1 folios have no space for a deferred list, but we also
+	 * won't waste much memory by not adding them to the deferred list.
+	 */
+	if (folio_order(folio) <= 1)
+		return;
 
 	/*
 	 * The try_to_unmap() in page reclaim path might reach here too,
diff --git a/mm/internal.h b/mm/internal.h
index f309a010d50f..5174b5b0c344 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -419,9 +419,7 @@ static inline struct folio *page_rmappable_folio(struct page *page)
 {
 	struct folio *folio = (struct folio *)page;
 
-	if (folio && folio_order(folio) > 1)
-		folio_prep_large_rmappable(folio);
-	return folio;
+	return folio_prep_large_rmappable(folio);
 }
 
 static inline void prep_compound_head(struct page *page, unsigned int order)
diff --git a/mm/readahead.c b/mm/readahead.c
index 2648ec4f0494..369c70e2be42 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -516,9 +516,6 @@ void page_cache_ra_order(struct readahead_control *ractl,
 		/* Don't allocate pages past EOF */
 		while (index + (1UL << order) - 1 > limit)
 			order--;
-		/* THP machinery does not support order-1 */
-		if (order == 1)
-			order = 0;
 		err = ra_alloc_folio(ractl, index, mark, order, gfp);
 		if (err)
 			break;
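A note on why order-1 folios lack this space: the folio's _deferred_list head
lives in its second tail page, which only exists for folios of order 2 or
above. The standalone userspace sketch below only illustrates the guard the
patch introduces; toy_folio and toy_prep_large_rmappable are made-up names,
not kernel APIs.

/*
 * Illustrative userspace model only -- not kernel code. It mimics the rule
 * the patch enforces: set up a deferred-list slot only when the folio order
 * is greater than 1.
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_folio {
	unsigned int order;		/* folio spans 2^order pages */
	bool has_deferred_list;		/* stand-in for _deferred_list */
};

/* Mirrors the reworked helper's shape: prepare the folio, return it. */
static struct toy_folio *toy_prep_large_rmappable(struct toy_folio *folio)
{
	if (!folio || folio->order == 0)
		return folio;		/* not a large folio, nothing to do */
	if (folio->order > 1)		/* room only from the 2nd tail page on */
		folio->has_deferred_list = true;
	return folio;
}

int main(void)
{
	struct toy_folio f1 = { .order = 1 };
	struct toy_folio f2 = { .order = 2 };

	toy_prep_large_rmappable(&f1);
	toy_prep_large_rmappable(&f2);
	printf("order-1 folio gets a deferred list: %s\n",
	       f1.has_deferred_list ? "yes" : "no");
	printf("order-2 folio gets a deferred list: %s\n",
	       f2.has_deferred_list ? "yes" : "no");
	return 0;
}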
From patchwork Mon Feb 26 09:49:30 2024
X-Patchwork-Submitter: "Pankaj Raghav (Samsung)"
X-Patchwork-Id: 207717
From: "Pankaj Raghav (Samsung)"
To: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, david@fromorbit.com, chandan.babu@oracle.com,
    akpm@linux-foundation.org, mcgrof@kernel.org, ziy@nvidia.com, hare@suse.de,
    djwong@kernel.org, gost.dev@samsung.com, linux-mm@kvack.org, willy@infradead.org,
    Pankaj Raghav
Subject: [PATCH 07/13] readahead: rework loop in page_cache_ra_unbounded()
Date: Mon, 26 Feb 2024 10:49:30 +0100
Message-ID: <20240226094936.2677493-8-kernel@pankajraghav.com>
In-Reply-To: <20240226094936.2677493-1-kernel@pankajraghav.com>
References: <20240226094936.2677493-1-kernel@pankajraghav.com>

From: Hannes Reinecke

Rework the loop in page_cache_ra_unbounded() to advance with the number of
pages in a folio instead of just one page at a time.

Signed-off-by: Hannes Reinecke
Co-developed-by: Pankaj Raghav
Signed-off-by: Pankaj Raghav
Acked-by: Darrick J. Wong
---
 mm/readahead.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/mm/readahead.c b/mm/readahead.c
index 325a25e4ee3a..ef0004147952 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -212,7 +212,7 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 	struct address_space *mapping = ractl->mapping;
 	unsigned long index = readahead_index(ractl);
 	gfp_t gfp_mask = readahead_gfp_mask(mapping);
-	unsigned long i;
+	unsigned long i = 0;
 
 	/*
 	 * Partway through the readahead operation, we will have added
@@ -230,7 +230,7 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 	/*
 	 * Preallocate as many pages as we will need.
 	 */
-	for (i = 0; i < nr_to_read; i++) {
+	while (i < nr_to_read) {
 		struct folio *folio = xa_load(&mapping->i_pages, index + i);
 
 		if (folio && !xa_is_value(folio)) {
@@ -243,8 +243,8 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 			 * not worth getting one just for that.
 			 */
 			read_pages(ractl);
-			ractl->_index++;
-			i = ractl->_index + ractl->_nr_pages - index - 1;
+			ractl->_index += folio_nr_pages(folio);
+			i = ractl->_index + ractl->_nr_pages - index;
 			continue;
 		}
 
@@ -256,13 +256,14 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 			folio_put(folio);
 			read_pages(ractl);
 			ractl->_index++;
-			i = ractl->_index + ractl->_nr_pages - index - 1;
+			i = ractl->_index + ractl->_nr_pages - index;
 			continue;
 		}
 		if (i == nr_to_read - lookahead_size)
 			folio_set_readahead(folio);
 		ractl->_workingset |= folio_test_workingset(folio);
-		ractl->_nr_pages++;
+		ractl->_nr_pages += folio_nr_pages(folio);
+		i += folio_nr_pages(folio);
 	}
 
 	/*
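To make the effect of the rework concrete, here is a small standalone sketch
(plain userspace C; folio_pages_at is an invented helper, not the kernel
function) showing how advancing the index by the size of the folio found at
each position covers the same range in fewer loop iterations than stepping
one page at a time:

/*
 * Standalone illustration only -- not kernel code. A 4-page "folio" sits at
 * indices 8..11; advancing by its size covers the range in fewer steps.
 */
#include <stdio.h>

/* Pretend lookup: how many pages does the folio covering 'index' span? */
static unsigned long folio_pages_at(unsigned long index)
{
	return (index >= 8 && index < 12) ? 4 : 1;
}

int main(void)
{
	unsigned long nr_to_read = 16;
	unsigned long i = 0, iterations = 0;

	while (i < nr_to_read) {
		/* ...look up or allocate the folio at index i here... */
		i += folio_pages_at(i);	/* advance by folio size, not by 1 */
		iterations++;
	}
	printf("covered %lu pages in %lu iterations\n", nr_to_read, iterations);
	return 0;
}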