Message ID | 20230522070905.16773-6-ying.huang@intel.com |
---|---|
State | New |
Headers | show |
Series | swap: cleanup get/put_swap_device() usage |
Commit Message
Huang, Ying
May 22, 2023, 7:09 a.m. UTC
The general rule for using a swap entry is as follows.

When we get a swap entry, if there isn't some other way to prevent
swapoff, such as the page lock on a swap cache page, the page table
lock, etc., the swap entry may become invalid because of a concurrent
swapoff. We then need to enclose all swap-related function calls with
get_swap_device() and put_swap_device(), unless the swap functions
call get/put_swap_device() themselves.

Add the rule to the comments of get_swap_device().
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Yu Zhao <yuzhao@google.com>
---
mm/swapfile.c | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)
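[Editor's note: the rule above is easiest to see in caller form. A minimal
sketch, not part of the patch: get_swap_device()/put_swap_device() are the
real mm/swapfile.c API, while do_something_with_swap_entry() is a
hypothetical placeholder for any operation that consumes the entry.]

	struct swap_info_struct *si;

	si = get_swap_device(entry);	/* pins the device against swapoff */
	if (!si)
		return;			/* entry already stale, e.g. raced with swapoff */

	/* swapoff cannot complete while the reference is held */
	do_something_with_swap_entry(entry);

	put_swap_device(si);		/* drops the pin taken above */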
Comments
On 22.05.23 09:09, Huang Ying wrote:
> The general rule to use a swap entry is as follows.
>
> When we get a swap entry, if there isn't some other way to prevent
> swapoff, such as page lock for swap cache, page table lock, etc., the
> swap entry may become invalid because of swapoff.
[...]
>   /*
> + * When we get a swap entry, if there isn't some other way to prevent
> + * swapoff, such as page lock for swap cache, page table lock, etc.,

Again "page lock for swap cache" might be imprecise.

> + * the swap entry may become invalid because of swapoff. Then, we
> + * need to enclose all swap related functions with get_swap_device()
> + * and put_swap_device(), unless the swap functions call
> + * get/put_swap_device() by themselves.
[...]

Reviewed-by: David Hildenbrand <david@redhat.com>
David Hildenbrand <david@redhat.com> writes:

> On 22.05.23 09:09, Huang Ying wrote:
>> The general rule to use a swap entry is as follows.
[...]
>>   /*
>> + * When we get a swap entry, if there isn't some other way to prevent
>> + * swapoff, such as page lock for swap cache, page table lock, etc.,
>
> Again "page lock for swap cache" might be imprecise.

Sure. Will revise this.

[...]
> Reviewed-by: David Hildenbrand <david@redhat.com>

Thanks!

Best Regards,
Huang, Ying
On Mon, May 22, 2023 at 12:09 AM Huang Ying <ying.huang@intel.com> wrote:
>
> The general rule to use a swap entry is as follows.
>
> When we get a swap entry, if there isn't some other way to prevent
> swapoff, such as page lock for swap cache, page table lock, etc., the
> swap entry may become invalid because of swapoff. Then, we need to
> enclose all swap related functions with get_swap_device() and
> put_swap_device(), unless the swap functions call
> get/put_swap_device() by themselves.
>
> Add the rule as comments of get_swap_device().
[...]

Thanks for clarifying the code!

With David's comments:

Reviewed-by: Yosry Ahmed <yosryahmed@google.com>
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 4dbaea64635d..0c1cb935b2eb 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1219,6 +1219,13 @@ static unsigned char __swap_entry_free_locked(struct swap_info_struct *p,
 }
 
 /*
+ * When we get a swap entry, if there isn't some other way to prevent
+ * swapoff, such as page lock for swap cache, page table lock, etc.,
+ * the swap entry may become invalid because of swapoff. Then, we
+ * need to enclose all swap related functions with get_swap_device()
+ * and put_swap_device(), unless the swap functions call
+ * get/put_swap_device() by themselves.
+ *
  * Check whether swap entry is valid in the swap device. If so,
  * return pointer to swap_info_struct, and keep the swap entry valid
  * via preventing the swap device from being swapoff, until
@@ -1227,9 +1234,8 @@ static unsigned char __swap_entry_free_locked(struct swap_info_struct *p,
  * Notice that swapoff or swapoff+swapon can still happen before the
  * percpu_ref_tryget_live() in get_swap_device() or after the
  * percpu_ref_put() in put_swap_device() if there isn't any other way
- * to prevent swapoff, such as page lock, page table lock, etc. The
- * caller must be prepared for that. For example, the following
- * situation is possible.
+ * to prevent swapoff. The caller must be prepared for that. For
+ * example, the following situation is possible.
  *
  * CPU1				CPU2
  * do_swap_page()
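[Editor's note: for context, a simplified sketch of the percpu_ref mechanism
the comment refers to, condensed from mm/swapfile.c. Error paths and
secondary checks are omitted, so treat this as an approximation rather than
the exact kernel code.]

	struct swap_info_struct *get_swap_device(swp_entry_t entry)
	{
		struct swap_info_struct *si = swp_swap_info(entry);

		if (!si)
			return NULL;
		/* Fails once swapoff has begun killing si->users. */
		if (!percpu_ref_tryget_live(&si->users))
			return NULL;
		/* Re-validate the offset now that swapoff is excluded. */
		if (swp_offset(entry) >= si->max) {
			percpu_ref_put(&si->users);
			return NULL;
		}
		return si;
	}

	static inline void put_swap_device(struct swap_info_struct *si)
	{
		percpu_ref_put(&si->users);
	}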