From patchwork Sat Nov 26 00:57:55 2022
X-Patchwork-Submitter: Jingbo Xu
X-Patchwork-Id: 26173
From: Jingbo Xu <jefflexu@linux.alibaba.com>
To: xiang@kernel.org, chao@kernel.org, linux-erofs@lists.ozlabs.org
Cc: linux-kernel@vger.kernel.org
Subject: [PATCH 1/2] erofs: support large folio in fscache mode
Date: Sat, 26 Nov 2022 08:57:55 +0800
Message-Id: <20221126005756.7662-2-jefflexu@linux.alibaba.com>
In-Reply-To: <20221126005756.7662-1-jefflexu@linux.alibaba.com>
References: <20221126005756.7662-1-jefflexu@linux.alibaba.com>
MIME-Version: 1.0
When large folios are supported, one folio can be split into several slices,
each of which may be mapped to META/UNMAPPED/MAPPED, and the folio can be
unlocked as a whole only when all slices have completed.

Thus always allocate an erofs_fscache_request for each .read_folio() or
.readahead(). The request is marked as completed, and the folio or folio
range is unlocked, only when all slices of that folio or folio range have
completed.

Signed-off-by: Jingbo Xu <jefflexu@linux.alibaba.com>
---
 fs/erofs/fscache.c | 116 +++++++++++++++++++--------------------------
 1 file changed, 48 insertions(+), 68 deletions(-)

diff --git a/fs/erofs/fscache.c b/fs/erofs/fscache.c
index 3cfe1af7a46e..0643b205c7eb 100644
--- a/fs/erofs/fscache.c
+++ b/fs/erofs/fscache.c
@@ -167,32 +167,18 @@ static int erofs_fscache_meta_read_folio(struct file *data, struct folio *folio)
 	return ret;
 }
 
-/*
- * Read into page cache in the range described by (@pos, @len).
- *
- * On return, if the output @unlock is true, the caller is responsible for page
- * unlocking; otherwise the callee will take this responsibility through request
- * completion.
- *
- * The return value is the number of bytes successfully handled, or negative
- * error code on failure. The only exception is that, the length of the range
- * instead of the error code is returned on failure after request is allocated,
- * so that .readahead() could advance rac accordingly.
- */
-static int erofs_fscache_data_read(struct address_space *mapping,
-				   loff_t pos, size_t len, bool *unlock)
+static int erofs_fscache_data_read_slice(struct erofs_fscache_request *req)
 {
+	struct address_space *mapping = req->mapping;
 	struct inode *inode = mapping->host;
 	struct super_block *sb = inode->i_sb;
-	struct erofs_fscache_request *req;
+	loff_t pos = req->start + req->submitted;
 	struct erofs_map_blocks map;
 	struct erofs_map_dev mdev;
 	struct iov_iter iter;
 	size_t count;
 	int ret;
 
-	*unlock = true;
-
 	map.m_la = pos;
 	ret = erofs_map_blocks(inode, &map, EROFS_GET_BLOCKS_RAW);
 	if (ret)
@@ -201,36 +187,37 @@ static int erofs_fscache_data_read(struct address_space *mapping,
 	if (map.m_flags & EROFS_MAP_META) {
 		struct erofs_buf buf = __EROFS_BUF_INITIALIZER;
 		erofs_blk_t blknr;
-		size_t offset, size;
+		size_t offset;
 		void *src;
 
 		/* For tail packing layout, the offset may be non-zero. */
 		offset = erofs_blkoff(map.m_pa);
 		blknr = erofs_blknr(map.m_pa);
-		size = map.m_llen;
+		count = map.m_llen;
 
 		src = erofs_read_metabuf(&buf, sb, blknr, EROFS_KMAP);
 		if (IS_ERR(src))
 			return PTR_ERR(src);
 
-		iov_iter_xarray(&iter, READ, &mapping->i_pages, pos, PAGE_SIZE);
-		if (copy_to_iter(src + offset, size, &iter) != size) {
+		iov_iter_xarray(&iter, READ, &mapping->i_pages, pos, count);
+		if (copy_to_iter(src + offset, count, &iter) != count) {
 			erofs_put_metabuf(&buf);
 			return -EFAULT;
 		}
-		iov_iter_zero(PAGE_SIZE - size, &iter);
 		erofs_put_metabuf(&buf);
-		return PAGE_SIZE;
+		req->submitted += count;
+		return 0;
 	}
 
+	count = req->len - req->submitted;
 	if (!(map.m_flags & EROFS_MAP_MAPPED)) {
-		count = len;
 		iov_iter_xarray(&iter, READ, &mapping->i_pages, pos, count);
 		iov_iter_zero(count, &iter);
-		return count;
+		req->submitted += count;
+		return 0;
 	}
 
-	count = min_t(size_t, map.m_llen - (pos - map.m_la), len);
+	count = min_t(size_t, map.m_llen - (pos - map.m_la), count);
 	DBG_BUGON(!count || count % PAGE_SIZE);
 
 	mdev = (struct erofs_map_dev) {
@@ -241,68 +228,61 @@ static int erofs_fscache_data_read(struct address_space *mapping,
 	if (ret)
 		return ret;
 
-	req = erofs_fscache_req_alloc(mapping, pos, count);
-	if (IS_ERR(req))
-		return PTR_ERR(req);
-
-	*unlock = false;
-
-	ret = erofs_fscache_read_folios_async(mdev.m_fscache->cookie,
+	return erofs_fscache_read_folios_async(mdev.m_fscache->cookie,
 			req, mdev.m_pa + (pos - map.m_la), count);
-	if (ret)
-		req->error = ret;
+}
 
-	erofs_fscache_req_put(req);
-	return count;
+/*
+ * Read into page cache in the range described by (req->start, req->len).
+ */
+static int erofs_fscache_data_read(struct erofs_fscache_request *req)
+{
+	int ret;
+
+	do {
+		ret = erofs_fscache_data_read_slice(req);
+		if (ret)
+			req->error = ret;
+	} while (!ret && req->submitted < req->len);
+
+	return ret;
 }
 
 static int erofs_fscache_read_folio(struct file *file, struct folio *folio)
 {
-	bool unlock;
+	struct erofs_fscache_request *req;
 	int ret;
 
-	DBG_BUGON(folio_size(folio) != EROFS_BLKSIZ);
-
-	ret = erofs_fscache_data_read(folio_mapping(folio), folio_pos(folio),
-				      folio_size(folio), &unlock);
-	if (unlock) {
-		if (ret > 0)
-			folio_mark_uptodate(folio);
+	req = erofs_fscache_req_alloc(folio_mapping(folio),
+			folio_pos(folio), folio_size(folio));
+	if (IS_ERR(req)) {
 		folio_unlock(folio);
+		return PTR_ERR(req);
 	}
-	return ret < 0 ? ret : 0;
+
+	ret = erofs_fscache_data_read(req);
+	erofs_fscache_req_put(req);
+	return ret;
 }
 
 static void erofs_fscache_readahead(struct readahead_control *rac)
 {
-	struct folio *folio;
-	size_t len, done = 0;
-	loff_t start, pos;
-	bool unlock;
-	int ret, size;
+	struct erofs_fscache_request *req;
 
 	if (!readahead_count(rac))
 		return;
 
-	start = readahead_pos(rac);
-	len = readahead_length(rac);
+	req = erofs_fscache_req_alloc(rac->mapping,
+			readahead_pos(rac), readahead_length(rac));
+	if (IS_ERR(req))
+		return;
 
-	do {
-		pos = start + done;
-		ret = erofs_fscache_data_read(rac->mapping, pos,
-					      len - done, &unlock);
-		if (ret <= 0)
-			return;
+	/* The request completion will drop refs on the folios. */
+	while (readahead_folio(rac))
+		;
 
-		size = ret;
-		while (size) {
-			folio = readahead_folio(rac);
-			size -= folio_size(folio);
-			if (unlock) {
-				folio_mark_uptodate(folio);
-				folio_unlock(folio);
-			}
-		}
-	} while ((done += ret) < len);
+	erofs_fscache_data_read(req);
+	erofs_fscache_req_put(req);
 }
 
 static const struct address_space_operations erofs_fscache_meta_aops = {
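
[Editor's note, not part of the patch] The submission loop introduced above can be modeled in plain user-space C for readers outside the kernel tree. This is only an illustrative sketch: struct fake_request and fake_read_slice() are hypothetical stand-ins for erofs_fscache_request and erofs_fscache_data_read_slice(), and the fixed 4096-byte slice size is an assumption made for the example; real slices are sized by erofs_map_blocks().

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for struct erofs_fscache_request: a byte
 * range plus a progress cursor and a sticky error code. */
struct fake_request {
	size_t start;     /* file offset of the range */
	size_t len;       /* total bytes covered by the request */
	size_t submitted; /* bytes handed off so far */
	int error;
};

/* Stand-in for erofs_fscache_data_read_slice(): pretend every slice
 * covers at most 4096 bytes and always succeeds. */
static int fake_read_slice(struct fake_request *req)
{
	size_t remaining = req->len - req->submitted;
	size_t count = remaining < 4096 ? remaining : 4096;

	req->submitted += count;
	return 0;
}

/* Mirrors the loop shape of the new erofs_fscache_data_read():
 * keep submitting slices until the whole range is covered or a
 * slice fails, recording any failure in the request itself so the
 * completion path can see it. */
static int fake_data_read(struct fake_request *req)
{
	int ret;

	do {
		ret = fake_read_slice(req);
		if (ret)
			req->error = ret;
	} while (!ret && req->submitted < req->len);

	return ret;
}
```

With a 10000-byte request this submits three slices (4096 + 4096 + 1808) before the loop exits; in the patch, the single request spanning all slices is what lets the folio be unlocked as a whole on completion.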