Message ID: 20240118111036.72641-3-21cnbao@gmail.com
State: New
From: Barry Song <21cnbao@gmail.com>
To: ryan.roberts@arm.com, akpm@linux-foundation.org, david@redhat.com, linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, mhocko@suse.com, shy828301@gmail.com, wangkefeng.wang@huawei.com, willy@infradead.org, xiang@kernel.org, ying.huang@intel.com, yuzhao@google.com, surenb@google.com, steven.price@arm.com, Chuanhua Han <hanchuanhua@oppo.com>, Barry Song <v-songbaohua@oppo.com>
Subject: [PATCH RFC 2/6] mm: swap: introduce swap_nr_free() for batched swap_free()
Date: Fri, 19 Jan 2024 00:10:32 +1300
Message-Id: <20240118111036.72641-3-21cnbao@gmail.com>
In-Reply-To: <20240118111036.72641-1-21cnbao@gmail.com>
References: <20231025144546.577640-1-ryan.roberts@arm.com> <20240118111036.72641-1-21cnbao@gmail.com>
Series: mm: support large folios swap-in
Commit Message
Barry Song
Jan. 18, 2024, 11:10 a.m. UTC
From: Chuanhua Han <hanchuanhua@oppo.com>

While swapping in a large folio, we need to free the swap entries for the
whole folio. To avoid frequently acquiring and releasing swap locks, it is
better to introduce an API for batched free.

Signed-off-by: Chuanhua Han <hanchuanhua@oppo.com>
Co-developed-by: Barry Song <v-songbaohua@oppo.com>
Signed-off-by: Barry Song <v-songbaohua@oppo.com>
---
 include/linux/swap.h |  6 ++++++
 mm/swapfile.c        | 29 +++++++++++++++++++++++++++++
 2 files changed, 35 insertions(+)
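The locking argument in the commit message can be sketched in a small user-space model (not kernel code; the function names mirror the kernel's but the lock is reduced to a counter so the difference is visible). Freeing nr entries one at a time takes the cluster lock nr times; the batched variant takes it once for the whole contiguous run.

```c
/* User-space model of why a batched free API helps. All names below are
 * illustrative stand-ins for the kernel's lock_cluster()/swap_free(). */
#include <assert.h>

static int lock_acquisitions;

static void lock_cluster(void)   { lock_acquisitions++; }
static void unlock_cluster(void) { }

/* swap_free()-style loop: one lock round-trip per entry */
int free_one_by_one(int nr)
{
	lock_acquisitions = 0;
	for (int i = 0; i < nr; i++) {
		lock_cluster();
		/* ... decrement the swap count of entry i ... */
		unlock_cluster();
	}
	return lock_acquisitions;
}

/* swap_nr_free()-style batch: one lock round-trip for the whole range */
int free_batched(int nr)
{
	lock_acquisitions = 0;
	lock_cluster();
	for (int i = 0; i < nr; i++) {
		/* ... decrement the swap count of entry i ... */
	}
	unlock_cluster();
	return lock_acquisitions;
}
```

For a PMD-sized folio (512 entries on a 4KB-page arm64 system), the per-entry path pays 512 lock round-trips against 1 for the batch.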
Comments
On Thu, Jan 18, 2024 at 3:11 AM Barry Song <21cnbao@gmail.com> wrote:
>
> From: Chuanhua Han <hanchuanhua@oppo.com>
>
> While swapping in a large folio, we need to free swaps related to the whole
> folio. To avoid frequently acquiring and releasing swap locks, it is better
> to introduce an API for batched free.
>
> Signed-off-by: Chuanhua Han <hanchuanhua@oppo.com>
> Co-developed-by: Barry Song <v-songbaohua@oppo.com>
> Signed-off-by: Barry Song <v-songbaohua@oppo.com>
> ---
>  include/linux/swap.h |  6 ++++++
>  mm/swapfile.c        | 29 +++++++++++++++++++++++++++++
>  2 files changed, 35 insertions(+)
>
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index 4db00ddad261..31a4ee2dcd1c 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -478,6 +478,7 @@ extern void swap_shmem_alloc(swp_entry_t);
>  extern int swap_duplicate(swp_entry_t);
>  extern int swapcache_prepare(swp_entry_t);
>  extern void swap_free(swp_entry_t);
> +extern void swap_nr_free(swp_entry_t entry, int nr_pages);
>  extern void swapcache_free_entries(swp_entry_t *entries, int n);
>  extern int free_swap_and_cache(swp_entry_t);
>  int swap_type_of(dev_t device, sector_t offset);
> @@ -553,6 +554,11 @@ static inline void swap_free(swp_entry_t swp)
>  {
>  }
>
> +void swap_nr_free(swp_entry_t entry, int nr_pages)
> +{
> +
> +}
> +
>  static inline void put_swap_folio(struct folio *folio, swp_entry_t swp)
>  {
>  }
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 556ff7347d5f..6321bda96b77 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1335,6 +1335,35 @@ void swap_free(swp_entry_t entry)
>  	__swap_entry_free(p, entry);
>  }
>
> +void swap_nr_free(swp_entry_t entry, int nr_pages)
> +{
> +	int i;
> +	struct swap_cluster_info *ci;
> +	struct swap_info_struct *p;
> +	unsigned type = swp_type(entry);
> +	unsigned long offset = swp_offset(entry);
> +	DECLARE_BITMAP(usage, SWAPFILE_CLUSTER) = { 0 };
> +
> +	VM_BUG_ON(offset % SWAPFILE_CLUSTER + nr_pages > SWAPFILE_CLUSTER);

The BUG_ON here seems a bit too developer-oriented. Maybe warn once and
fall back to freeing one by one?

How big are your typical SWAPFILE_CLUSTER and nr_pages on arm? I ask
because if nr_pages > 64, that is a totally different game: we can
completely bypass the swap slot cache.

> +
> +	if (nr_pages == 1) {
> +		swap_free(entry);
> +		return;
> +	}
> +
> +	p = _swap_info_get(entry);
> +
> +	ci = lock_cluster(p, offset);
> +	for (i = 0; i < nr_pages; i++) {
> +		if (__swap_entry_free_locked(p, offset + i, 1))
> +			__bitmap_set(usage, i, 1);
> +	}
> +	unlock_cluster(ci);
> +
> +	for_each_clear_bit(i, usage, nr_pages)
> +		free_swap_slot(swp_entry(type, offset + i));

Notice that free_swap_slot() internally has per-CPU cache batching as
well: every free_swap_slot() call takes some per-CPU swap slot cache and
cache->lock, so there is double batching here. If the typical batch size
is bigger than 64 entries, we can go directly to a batched
swap_entry_free() and avoid the free_swap_slot() batching altogether.
Unlike free_swap_slot_entries(), the swap slots here are all from one
swap device, so there is no need to sort and group them by swap device.

Chris

> +}
> +
>  /*
>   * Called after dropping swapcache to decrease refcnt to swap entries.
>   */
> --
> 2.34.1
>
Hi Chris,

Thanks!

On Sat, Jan 27, 2024 at 12:17 PM Chris Li <chrisl@kernel.org> wrote:
>
> On Thu, Jan 18, 2024 at 3:11 AM Barry Song <21cnbao@gmail.com> wrote:
> >
> > From: Chuanhua Han <hanchuanhua@oppo.com>
> >
> > While swapping in a large folio, we need to free swaps related to the whole
> > folio. To avoid frequently acquiring and releasing swap locks, it is better
> > to introduce an API for batched free.
[...]
> > +	VM_BUG_ON(offset % SWAPFILE_CLUSTER + nr_pages > SWAPFILE_CLUSTER);
>
> BUG_ON here seems a bit too developer originated. Maybe warn once and
> roll back to free one by one?

The function is used only for the case where we are quite sure we are
freeing contiguous swap entries within one cluster. If that is not the
case, we would need an array of entries[]. Would people be more
comfortable with a WARN_ON instead? The problem is that if this really
happens, it is a bug, and a WARN isn't enough.

> How big is your typical SWAPFILE_CLUSTER and nr_pages typically on arm?

My case is SWAPFILE_CLUSTER = HPAGE_PMD_NR = 2MB/4KB = 512.

> I ask this question because if nr_pages > 64, that is a totally
> different game, we can completely bypass the swap cache slots.

I agree we have a chance to bypass the slot cache if nr_pages is bigger
than SWAP_SLOTS_CACHE_SIZE. On the other hand, even when nr_pages < 64,
we still have a good chance to optimize free_swap_slot() by batching, as
there are many spin_locks and sort() calls for each single entry.

> Notice that free_swap_slot() internal has per CPU cache batching as
> well. Every free_swap_slot will get some per_cpu swap slot cache and
> cache->lock. There is double batching here.
> If the typical batch size here is bigger than 64 entries, we can go
> directly to batching swap_entry_free and avoid the free_swap_slot()
> batching altogether. Unlike free_swap_slot_entries(), here swap slots
> are all from one swap device, there is no need to sort and group the
> swap slot by swap devices.

I agree. You are completely right! However, to make the patchset smaller
at the beginning, I prefer these optimizations to be deferred as a
separate patchset after this one.

Thanks
Barry
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 4db00ddad261..31a4ee2dcd1c 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -478,6 +478,7 @@ extern void swap_shmem_alloc(swp_entry_t);
 extern int swap_duplicate(swp_entry_t);
 extern int swapcache_prepare(swp_entry_t);
 extern void swap_free(swp_entry_t);
+extern void swap_nr_free(swp_entry_t entry, int nr_pages);
 extern void swapcache_free_entries(swp_entry_t *entries, int n);
 extern int free_swap_and_cache(swp_entry_t);
 int swap_type_of(dev_t device, sector_t offset);
@@ -553,6 +554,11 @@ static inline void swap_free(swp_entry_t swp)
 {
 }
 
+void swap_nr_free(swp_entry_t entry, int nr_pages)
+{
+
+}
+
 static inline void put_swap_folio(struct folio *folio, swp_entry_t swp)
 {
 }
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 556ff7347d5f..6321bda96b77 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1335,6 +1335,35 @@ void swap_free(swp_entry_t entry)
 	__swap_entry_free(p, entry);
 }
 
+void swap_nr_free(swp_entry_t entry, int nr_pages)
+{
+	int i;
+	struct swap_cluster_info *ci;
+	struct swap_info_struct *p;
+	unsigned type = swp_type(entry);
+	unsigned long offset = swp_offset(entry);
+	DECLARE_BITMAP(usage, SWAPFILE_CLUSTER) = { 0 };
+
+	VM_BUG_ON(offset % SWAPFILE_CLUSTER + nr_pages > SWAPFILE_CLUSTER);
+
+	if (nr_pages == 1) {
+		swap_free(entry);
+		return;
+	}
+
+	p = _swap_info_get(entry);
+
+	ci = lock_cluster(p, offset);
+	for (i = 0; i < nr_pages; i++) {
+		if (__swap_entry_free_locked(p, offset + i, 1))
+			__bitmap_set(usage, i, 1);
+	}
+	unlock_cluster(ci);
+
+	for_each_clear_bit(i, usage, nr_pages)
+		free_swap_slot(swp_entry(type, offset + i));
+}
+
 /*
  * Called after dropping swapcache to decrease refcnt to swap entries.
  */
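The bookkeeping in the patch's two-pass loop can be modeled in user space, assuming a toy refcount array in place of the real swap map and simple bitmap helpers standing in for __bitmap_set()/for_each_clear_bit(). Entries whose count is still nonzero after the decrement get their bit set in `usage`; the second pass then frees only the clear bits:

```c
/* User-space sketch of swap_nr_free()'s bitmap bookkeeping. The helpers
 * below are simplified stand-ins for the kernel versions; swap_count is
 * a toy refcount array, and nr_pages is assumed <= CLUSTER_SIZE. */
#include <limits.h>
#include <string.h>

#define CLUSTER_SIZE  512
#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

static unsigned long usage[CLUSTER_SIZE / BITS_PER_LONG];

static void bitmap_set_bit(unsigned long *map, int bit)
{
	map[bit / BITS_PER_LONG] |= 1UL << (bit % BITS_PER_LONG);
}

static int bitmap_test_bit(const unsigned long *map, int bit)
{
	return (map[bit / BITS_PER_LONG] >> (bit % BITS_PER_LONG)) & 1;
}

/* Returns the number of entries handed to the "free slot" path. */
int batched_free(unsigned char *swap_count, int offset, int nr_pages)
{
	int freed = 0;

	memset(usage, 0, sizeof(usage));

	/* Pass 1 (under the cluster lock in the kernel): drop one
	 * reference per entry; remember which entries remain in use. */
	for (int i = 0; i < nr_pages; i++) {
		if (--swap_count[offset + i] > 0)
			bitmap_set_bit(usage, i);
	}

	/* Pass 2 (mirrors for_each_clear_bit): free only the entries
	 * whose count reached zero. */
	for (int i = 0; i < nr_pages; i++) {
		if (!bitmap_test_bit(usage, i))
			freed++;	/* free_swap_slot() in the kernel */
	}
	return freed;
}
```

The design point the bitmap buys is that all reference drops happen in one critical section, while the potentially slower slot-freeing runs afterwards with the cluster lock already released.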