Message ID: 20231212-btrfs_map_block-cleanup-v1-11-b2d954d9a55b@wdc.com
State: New
From: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Date: Tue, 12 Dec 2023 04:38:09 -0800
Subject: [PATCH 11/13] btrfs: open code set_io_stripe for RAID56
To: Chris Mason <clm@fb.com>, Josef Bacik <josef@toxicpanda.com>, David Sterba <dsterba@suse.com>
Cc: linux-btrfs@vger.kernel.org, linux-kernel@vger.kernel.org, Johannes Thumshirn <johannes.thumshirn@wdc.com>
Series: btrfs: clean up RAID I/O geometry calculation
Commit Message
Johannes Thumshirn
Dec. 12, 2023, 12:38 p.m. UTC
Open code set_io_stripe() for RAID56, as it (a) uses a different method to
calculate the stripe_index and (b) doesn't need to go through the
raid-stripe-tree mapping code.
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
---
fs/btrfs/volumes.c | 17 ++++++++++-------
1 file changed, 10 insertions(+), 7 deletions(-)
Comments
On Tue, Dec 12, 2023 at 04:38:09AM -0800, Johannes Thumshirn wrote:
> Open code set_io_stripe() for RAID56, as it a) uses a different method to
> calculate the stripe_index and b) doesn't need to go through raid-stripe-tree
> mapping code.

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>

I think raid stripe tree handling also really should move out of
set_io_stripe. Below is the latest I have, although it probably won't
apply to your tree:

---
From ac208da48d7f9d11eef8a01ac0c6fbf9681665b5 Mon Sep 17 00:00:00 2001
From: Christoph Hellwig <hch@lst.de>
Date: Thu, 22 Jun 2023 05:53:13 +0200
Subject: btrfs: move raid-stripe-tree handling out of set_io_stripe

set_io_stripe gets a little too complicated with the raid-stripe-tree
handling. Move it out into the only caller that actually needs it.

The only read with more than a single stripe is the parity raid recovery
case that will need very special handling anyway once implemented.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/btrfs/volumes.c | 61 ++++++++++++++++++++--------------------------
 1 file changed, 27 insertions(+), 34 deletions(-)

diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 30ee5d1670d034..e32eefa242b0a4 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -6233,22 +6233,12 @@ static u64 btrfs_max_io_len(struct map_lookup *map, enum btrfs_map_op op,
 	return U64_MAX;
 }
 
-static int set_io_stripe(struct btrfs_fs_info *fs_info, enum btrfs_map_op op,
-			 u64 logical, u64 *length, struct btrfs_io_stripe *dst,
-			 struct map_lookup *map, u32 stripe_index,
-			 u64 stripe_offset, u64 stripe_nr)
+static void set_io_stripe(struct btrfs_io_stripe *dst, const struct map_lookup *map,
+			  u32 stripe_index, u64 stripe_offset, u32 stripe_nr)
 {
 	dst->dev = map->stripes[stripe_index].dev;
-
-	if (op == BTRFS_MAP_READ &&
-	    btrfs_use_raid_stripe_tree(fs_info, map->type))
-		return btrfs_get_raid_extent_offset(fs_info, logical, length,
-						    map->type, stripe_index,
-						    dst);
-
 	dst->physical = map->stripes[stripe_index].physical +
 		stripe_offset + ((u64)stripe_nr << BTRFS_STRIPE_LEN_SHIFT);
-	return 0;
 }
 
 int btrfs_map_block(struct btrfs_fs_info *fs_info, enum btrfs_map_op op,
@@ -6423,15 +6413,24 @@ int btrfs_map_block(struct btrfs_fs_info *fs_info, enum btrfs_map_op op,
 	 * physical block information on the stack instead of allocating an
 	 * I/O context structure.
 	 */
-	if (smap && num_alloc_stripes == 1 &&
-	    !(btrfs_use_raid_stripe_tree(fs_info, map->type) &&
-	      op != BTRFS_MAP_READ) &&
-	    !((map->type & BTRFS_BLOCK_GROUP_RAID56_MASK) && mirror_num > 1)) {
-		ret = set_io_stripe(fs_info, op, logical, length, smap, map,
-				    stripe_index, stripe_offset, stripe_nr);
-		*mirror_num_ret = mirror_num;
-		*bioc_ret = NULL;
-		goto out;
+	if (smap && num_alloc_stripes == 1) {
+		if (op == BTRFS_MAP_READ &&
+		    btrfs_use_raid_stripe_tree(fs_info, map->type)) {
+			ret = btrfs_get_raid_extent_offset(fs_info, logical,
+							   length, map->type,
+							   stripe_index, smap);
+			*mirror_num_ret = mirror_num;
+			*bioc_ret = NULL;
+			goto out;
+		} else if (!(map->type & BTRFS_BLOCK_GROUP_RAID56_MASK) ||
+			   mirror_num == 0) {
+			set_io_stripe(smap, map, stripe_index, stripe_offset,
+				      stripe_nr);
+			*mirror_num_ret = mirror_num;
+			*bioc_ret = NULL;
+			ret = 0;
+			goto out;
+		}
 	}
 
 	bioc = alloc_btrfs_io_context(fs_info, logical, num_alloc_stripes);
@@ -6448,6 +6447,8 @@ int btrfs_map_block(struct btrfs_fs_info *fs_info, enum btrfs_map_op op,
 	 *
 	 * It's still mostly the same as other profiles, just with extra rotation.
 	 */
+	ASSERT(op != BTRFS_MAP_READ ||
+	       btrfs_use_raid_stripe_tree(fs_info, map->type));
 	if (map->type & BTRFS_BLOCK_GROUP_RAID56_MASK && need_raid_map &&
 	    (op != BTRFS_MAP_READ || mirror_num > 1)) {
 		/*
@@ -6461,29 +6462,21 @@ int btrfs_map_block(struct btrfs_fs_info *fs_info, enum btrfs_map_op op,
 		bioc->full_stripe_logical = em->start +
 			((stripe_nr * data_stripes) << BTRFS_STRIPE_LEN_SHIFT);
 		for (i = 0; i < num_stripes; i++)
-			ret = set_io_stripe(fs_info, op, logical, length,
-					    &bioc->stripes[i], map,
-					    (i + stripe_nr) % num_stripes,
-					    stripe_offset, stripe_nr);
+			set_io_stripe(&bioc->stripes[i], map,
+				      (i + stripe_nr) % num_stripes,
+				      stripe_offset, stripe_nr);
 	} else {
 		/*
 		 * For all other non-RAID56 profiles, just copy the target
 		 * stripe into the bioc.
 		 */
 		for (i = 0; i < num_stripes; i++) {
-			ret = set_io_stripe(fs_info, op, logical, length,
-					    &bioc->stripes[i], map, stripe_index,
-					    stripe_offset, stripe_nr);
+			set_io_stripe(&bioc->stripes[i], map, stripe_index,
+				      stripe_offset, stripe_nr);
 			stripe_index++;
 		}
 	}
 
-	if (ret) {
-		*bioc_ret = NULL;
-		btrfs_put_bioc(bioc);
-		goto out;
-	}
-
 	if (op != BTRFS_MAP_READ)
 		max_errors = btrfs_chunk_max_errors(map);
On 13.12.23 09:58, Christoph Hellwig wrote:
> On Tue, Dec 12, 2023 at 04:38:09AM -0800, Johannes Thumshirn wrote:
>> Open code set_io_stripe() for RAID56, as it a) uses a different method to
>> calculate the stripe_index and b) doesn't need to go through raid-stripe-tree
>> mapping code.
>
> Looks good:
>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
>
> I think raid stripe tree handling also really should move out of
> set_io_stripe. Below is the latest I have, although it probably won't
> apply to your tree:

That would work as well and replace patch 1 then. Let me think about it.
On Wed, Dec 13, 2023 at 09:09:47AM +0000, Johannes Thumshirn wrote:
> > I think raid stripe tree handling also really should move out of
> > set_io_stripe. Below is the latest I have, although it probably won't
> > apply to your tree:
>
> That would work as well and replace patch 1 then. Let me think about it.

I actually really like splitting that check into a helper for
documentation purposes. Btw, this is my full tree that the patch is from
in case it is useful:

http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/raid-stripe-tree-cleanups
On 13.12.23 10:17, hch@infradead.org wrote:
> On Wed, Dec 13, 2023 at 09:09:47AM +0000, Johannes Thumshirn wrote:
>>> I think raid stripe tree handling also really should move out of
>>> set_io_stripe. Below is the latest I have, although it probably won't
>>> apply to your tree:
>>
>> That would work as well and replace patch 1 then. Let me think about it.
>
> I actually really like splitting that check into a helper for
> documentation purposes. Btw, this is my full tree that the patch is from
> in case it is useful:
>
> http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/raid-stripe-tree-cleanups

Cool thanks, I'll have a look :)
On 13.12.23 09:58, Christoph Hellwig wrote:
>
> I think raid stripe tree handling also really should move out of
> set_io_stripe. Below is the latest I have, although it probably won't
> apply to your tree:

I've decided to add that one afterwards giving the attribution to you.
There are some other patches in your tree as well, which I want to have
a look at too.

[full quote of the patch snipped]
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 946333c8c331..7df991a81c4b 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -6670,13 +6670,16 @@ int btrfs_map_block(struct btrfs_fs_info *fs_info, enum btrfs_map_op op,
 			btrfs_stripe_nr_to_offset(io_geom.stripe_nr *
 						  nr_data_stripes(map));
 		for (int i = 0; i < io_geom.num_stripes; i++) {
-			ret = set_io_stripe(fs_info, op, logical, length,
-					    &bioc->stripes[i], map,
-					    (i + io_geom.stripe_nr) % io_geom.num_stripes,
-					    io_geom.stripe_offset,
-					    io_geom.stripe_nr);
-			if (ret < 0)
-				break;
+			struct btrfs_io_stripe *dst = &bioc->stripes[i];
+			u32 stripe_index;
+
+			stripe_index =
+				(i + io_geom.stripe_nr) % io_geom.num_stripes;
+			dst->dev = map->stripes[stripe_index].dev;
+			dst->physical =
+				map->stripes[stripe_index].physical +
+				io_geom.stripe_offset +
+				btrfs_stripe_nr_to_offset(io_geom.stripe_nr);
 		}
 	} else {
 		/*