Message ID: 20231119194740.94101-22-ryncsn@gmail.com
State: New
Headers:
From: Kairui Song <ryncsn@gmail.com>
To: linux-mm@kvack.org
Cc: Andrew Morton <akpm@linux-foundation.org>, "Huang, Ying" <ying.huang@intel.com>, David Hildenbrand <david@redhat.com>, Hugh Dickins <hughd@google.com>, Johannes Weiner <hannes@cmpxchg.org>, Matthew Wilcox <willy@infradead.org>, Michal Hocko <mhocko@suse.com>, linux-kernel@vger.kernel.org, Kairui Song <kasong@tencent.com>
Subject: [PATCH 21/24] swap: make swapin_readahead result checking argument mandatory
Date: Mon, 20 Nov 2023 03:47:37 +0800
Message-ID: <20231119194740.94101-22-ryncsn@gmail.com>
In-Reply-To: <20231119194740.94101-1-ryncsn@gmail.com>
References: <20231119194740.94101-1-ryncsn@gmail.com>
Reply-To: Kairui Song <kasong@tencent.com>
Series: Swapin path refactor for optimization and bugfix
Commit Message
Kairui Song
Nov. 19, 2023, 7:47 p.m. UTC
From: Kairui Song <kasong@tencent.com>

There is only one caller now, in the page fault path; make the result return
argument mandatory.

Signed-off-by: Kairui Song <kasong@tencent.com>
---
 mm/swap_state.c | 17 +++++++----------
 1 file changed, 7 insertions(+), 10 deletions(-)
Comments
On Sun, Nov 19, 2023 at 11:49 AM Kairui Song <ryncsn@gmail.com> wrote:
>
> From: Kairui Song <kasong@tencent.com>
>
> There is only one caller now, in the page fault path; make the result return
> argument mandatory.
>
> Signed-off-by: Kairui Song <kasong@tencent.com>
> ---
>  mm/swap_state.c | 17 +++++++----------
>  1 file changed, 7 insertions(+), 10 deletions(-)
>
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index 6f39aa8394f1..0433a2586c6d 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -913,7 +913,6 @@ static struct page *swapin_no_readahead(swp_entry_t entry, gfp_t gfp_mask,
>  struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
>  			      struct vm_fault *vmf, enum swap_cache_result *result)
>  {
> -	enum swap_cache_result cache_result;
>  	struct swap_info_struct *si;
>  	struct mempolicy *mpol;
>  	void *shadow = NULL;
> @@ -928,29 +927,27 @@ struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
>
>  	folio = swap_cache_get_folio(entry, vmf, &shadow);
>  	if (folio) {
> +		*result = SWAP_CACHE_HIT;
>  		page = folio_file_page(folio, swp_offset(entry));
> -		cache_result = SWAP_CACHE_HIT;
>  		goto done;
>  	}
>
>  	mpol = get_vma_policy(vmf->vma, vmf->address, 0, &ilx);
>  	if (swap_use_no_readahead(si, swp_offset(entry))) {
> +		*result = SWAP_CACHE_BYPASS;

Each of these "*result" assignments will compile into a memory store
instruction. The compiler most likely can't optimize and combine them,
because from its point of view a store through the pointer can cause a
segfault. The multiple local variable assignments, by contrast, can be
compiled into a few register assignments, so they do not cost as much as
multiple memory stores.

>  		page = swapin_no_readahead(entry, gfp_mask, mpol, ilx, vmf->vma->vm_mm);
> -		cache_result = SWAP_CACHE_BYPASS;
>  		if (shadow)
>  			workingset_refault(page_folio(page), shadow);
> -	} else if (swap_use_vma_readahead(si)) {
> -		page = swap_vma_readahead(entry, gfp_mask, mpol, ilx, vmf);
> -		cache_result = SWAP_CACHE_MISS;
>  	} else {
> -		page = swap_cluster_readahead(entry, gfp_mask, mpol, ilx);
> -		cache_result = SWAP_CACHE_MISS;
> +		*result = SWAP_CACHE_MISS;
> +		if (swap_use_vma_readahead(si))
> +			page = swap_vma_readahead(entry, gfp_mask, mpol, ilx, vmf);
> +		else
> +			page = swap_cluster_readahead(entry, gfp_mask, mpol, ilx);

I recall you introduced or heavily modified this function in a previous
patch. Consider combining some of the patches and presenting the final
version sooner: from a reviewing point of view, there is no need to review
so many internal versions that get overwritten anyway.

>  	}
>  	mpol_cond_put(mpol);
>  done:
>  	put_swap_device(si);
> -	if (result)
> -		*result = cache_result;

The original version, with the check and assignment in one place, is
better: safer, and it produces better code.

Chris
Chris Li <chrisl@kernel.org> wrote on Wednesday, Nov 22, 2023 at 13:18:
>
> On Sun, Nov 19, 2023 at 11:49 AM Kairui Song <ryncsn@gmail.com> wrote:
> >
> > From: Kairui Song <kasong@tencent.com>
> >
> > There is only one caller now, in the page fault path; make the result return
> > argument mandatory.
> >
> > Signed-off-by: Kairui Song <kasong@tencent.com>
> > ---
> >  mm/swap_state.c | 17 +++++++----------
> >  1 file changed, 7 insertions(+), 10 deletions(-)
> >
> > [patch hunks snipped]
> >
> >  	mpol = get_vma_policy(vmf->vma, vmf->address, 0, &ilx);
> >  	if (swap_use_no_readahead(si, swp_offset(entry))) {
> > +		*result = SWAP_CACHE_BYPASS;
>
> Each of these "*result" assignments will compile into a memory store
> instruction. The compiler most likely can't optimize and combine them,
> because from its point of view a store through the pointer can cause a
> segfault. The multiple local variable assignments, by contrast, can be
> compiled into a few register assignments, so they do not cost as much
> as multiple memory stores.
>
> > [patch hunks snipped]
>
> I recall you introduced or heavily modified this function in a previous
> patch. Consider combining some of the patches and presenting the final
> version sooner: from a reviewing point of view, there is no need to
> review so many internal versions that get overwritten anyway.
>
> >  	}
> >  	mpol_cond_put(mpol);
> >  done:
> >  	put_swap_device(si);
> > -	if (result)
> > -		*result = cache_result;
>
> The original version, with the check and assignment in one place, is
> better: safer, and it produces better code.

Yes, that's less error-prone indeed; saving an "if" doesn't seem worth
all the potential trouble. I will drop this.
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 6f39aa8394f1..0433a2586c6d 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -913,7 +913,6 @@ static struct page *swapin_no_readahead(swp_entry_t entry, gfp_t gfp_mask,
 struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
 			      struct vm_fault *vmf, enum swap_cache_result *result)
 {
-	enum swap_cache_result cache_result;
 	struct swap_info_struct *si;
 	struct mempolicy *mpol;
 	void *shadow = NULL;
@@ -928,29 +927,27 @@ struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
 
 	folio = swap_cache_get_folio(entry, vmf, &shadow);
 	if (folio) {
+		*result = SWAP_CACHE_HIT;
 		page = folio_file_page(folio, swp_offset(entry));
-		cache_result = SWAP_CACHE_HIT;
 		goto done;
 	}
 
 	mpol = get_vma_policy(vmf->vma, vmf->address, 0, &ilx);
 	if (swap_use_no_readahead(si, swp_offset(entry))) {
+		*result = SWAP_CACHE_BYPASS;
 		page = swapin_no_readahead(entry, gfp_mask, mpol, ilx, vmf->vma->vm_mm);
-		cache_result = SWAP_CACHE_BYPASS;
 		if (shadow)
 			workingset_refault(page_folio(page), shadow);
-	} else if (swap_use_vma_readahead(si)) {
-		page = swap_vma_readahead(entry, gfp_mask, mpol, ilx, vmf);
-		cache_result = SWAP_CACHE_MISS;
 	} else {
-		page = swap_cluster_readahead(entry, gfp_mask, mpol, ilx);
-		cache_result = SWAP_CACHE_MISS;
+		*result = SWAP_CACHE_MISS;
+		if (swap_use_vma_readahead(si))
+			page = swap_vma_readahead(entry, gfp_mask, mpol, ilx, vmf);
+		else
+			page = swap_cluster_readahead(entry, gfp_mask, mpol, ilx);
 	}
 	mpol_cond_put(mpol);
 done:
 	put_swap_device(si);
-	if (result)
-		*result = cache_result;
 	return page;
 }