From patchwork Thu Nov 24 03:42:11 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jingbo Xu <jefflexu@linux.alibaba.com>
X-Patchwork-Id: 25304
From: Jingbo Xu <jefflexu@linux.alibaba.com>
To: dhowells@redhat.com, jlayton@kernel.org, xiang@kernel.org, chao@kernel.org,
	linux-cachefs@redhat.com, linux-erofs@lists.ozlabs.org
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v5 1/2] fscache,cachefiles: add prepare_ondemand_read() callback
Date: Thu, 24 Nov 2022 11:42:11 +0800
Message-Id: <20221124034212.81892-2-jefflexu@linux.alibaba.com>
X-Mailer: git-send-email 2.19.1.6.gb485710b
In-Reply-To: <20221124034212.81892-1-jefflexu@linux.alibaba.com>
References: <20221124034212.81892-1-jefflexu@linux.alibaba.com>

Add a prepare_ondemand_read() callback dedicated to the on-demand read
scenario, so that callers in this scenario can be decoupled from
netfs_io_subrequest.

The original cachefiles_prepare_read() is refactored into a generic
routine that accepts a parameter list instead of a netfs_io_subrequest.
There is no logic change, except that the debug IDs of the subrequest
and request are dropped from trace_cachefiles_prep_read().
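Not part of the patch, just a rough usage sketch: assuming the caller already
holds the struct netfs_cache_resources set up for the cookie, an on-demand
consumer is expected to reach the new hook through cres->ops, preset the
NETFS_SREQ_ONDEMAND bit in a local flags word, and let the cache shorten the
length to the cached/uncached boundary. The helper name below is made up for
illustration only.

/* Illustrative only; assumes the <linux/netfs.h> declarations. */
static enum netfs_io_source
example_prepare_ondemand_read(struct netfs_cache_resources *cres,
			      loff_t start, size_t *_len,
			      loff_t i_size, ino_t netfs_ino)
{
	/* Mark the request as on-demand so that a cache miss triggers
	 * cachefiles_ondemand_read() instead of an immediate
	 * NETFS_DOWNLOAD_FROM_SERVER answer.
	 */
	unsigned long flags = 1UL << NETFS_SREQ_ONDEMAND;

	/* *_len may be shortened to the cached/uncached boundary, and
	 * NETFS_SREQ_COPY_TO_CACHE may be set in flags if the data has
	 * to be fetched from the server and written back to the cache.
	 */
	return cres->ops->prepare_ondemand_read(cres, start, _len, i_size,
						&flags, netfs_ino);
}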
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Jingbo Xu <jefflexu@linux.alibaba.com>
Acked-by: David Howells <dhowells@redhat.com>
---
 fs/cachefiles/io.c                | 77 ++++++++++++++++++++-----------
 include/linux/netfs.h             |  8 ++++
 include/trace/events/cachefiles.h | 27 +++++------
 3 files changed, 72 insertions(+), 40 deletions(-)

diff --git a/fs/cachefiles/io.c b/fs/cachefiles/io.c
index 000a28f46e59..175a25fcade8 100644
--- a/fs/cachefiles/io.c
+++ b/fs/cachefiles/io.c
@@ -385,38 +385,35 @@ static int cachefiles_write(struct netfs_cache_resources *cres,
 				  term_func, term_func_priv);
 }
 
-/*
- * Prepare a read operation, shortening it to a cached/uncached
- * boundary as appropriate.
- */
-static enum netfs_io_source cachefiles_prepare_read(struct netfs_io_subrequest *subreq,
-						    loff_t i_size)
+static inline enum netfs_io_source
+cachefiles_do_prepare_read(struct netfs_cache_resources *cres,
+			   loff_t start, size_t *_len, loff_t i_size,
+			   unsigned long *_flags, ino_t netfs_ino)
 {
 	enum cachefiles_prepare_read_trace why;
-	struct netfs_io_request *rreq = subreq->rreq;
-	struct netfs_cache_resources *cres = &rreq->cache_resources;
-	struct cachefiles_object *object;
+	struct cachefiles_object *object = NULL;
 	struct cachefiles_cache *cache;
 	struct fscache_cookie *cookie = fscache_cres_cookie(cres);
 	const struct cred *saved_cred;
 	struct file *file = cachefiles_cres_file(cres);
 	enum netfs_io_source ret = NETFS_DOWNLOAD_FROM_SERVER;
+	size_t len = *_len;
 	loff_t off, to;
 	ino_t ino = file ? file_inode(file)->i_ino : 0;
 	int rc;
 
-	_enter("%zx @%llx/%llx", subreq->len, subreq->start, i_size);
+	_enter("%zx @%llx/%llx", len, start, i_size);
 
-	if (subreq->start >= i_size) {
+	if (start >= i_size) {
 		ret = NETFS_FILL_WITH_ZEROES;
 		why = cachefiles_trace_read_after_eof;
 		goto out_no_object;
 	}
 
 	if (test_bit(FSCACHE_COOKIE_NO_DATA_TO_READ, &cookie->flags)) {
-		__set_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags);
+		__set_bit(NETFS_SREQ_COPY_TO_CACHE, _flags);
 		why = cachefiles_trace_read_no_data;
-		if (!test_bit(NETFS_SREQ_ONDEMAND, &subreq->flags))
+		if (!test_bit(NETFS_SREQ_ONDEMAND, _flags))
 			goto out_no_object;
 	}
 
@@ -437,7 +434,7 @@ static enum netfs_io_source cachefiles_prepare_read(struct netfs_io_subrequest *
 retry:
 	off = cachefiles_inject_read_error();
 	if (off == 0)
-		off = vfs_llseek(file, subreq->start, SEEK_DATA);
+		off = vfs_llseek(file, start, SEEK_DATA);
 	if (off < 0 && off >= (loff_t)-MAX_ERRNO) {
 		if (off == (loff_t)-ENXIO) {
 			why = cachefiles_trace_read_seek_nxio;
@@ -449,21 +446,22 @@ static enum netfs_io_source cachefiles_prepare_read(struct netfs_io_subrequest *
 		goto out;
 	}
 
-	if (off >= subreq->start + subreq->len) {
+	if (off >= start + len) {
 		why = cachefiles_trace_read_found_hole;
 		goto download_and_store;
 	}
 
-	if (off > subreq->start) {
+	if (off > start) {
 		off = round_up(off, cache->bsize);
-		subreq->len = off - subreq->start;
+		len = off - start;
+		*_len = len;
 		why = cachefiles_trace_read_found_part;
 		goto download_and_store;
 	}
 
 	to = cachefiles_inject_read_error();
 	if (to == 0)
-		to = vfs_llseek(file, subreq->start, SEEK_HOLE);
+		to = vfs_llseek(file, start, SEEK_HOLE);
 	if (to < 0 && to >= (loff_t)-MAX_ERRNO) {
 		trace_cachefiles_io_error(object, file_inode(file), to,
 					  cachefiles_trace_seek_error);
@@ -471,12 +469,13 @@ static enum netfs_io_source cachefiles_prepare_read(struct netfs_io_subrequest *
 		goto out;
 	}
 
-	if (to < subreq->start + subreq->len) {
-		if (subreq->start + subreq->len >= i_size)
+	if (to < start + len) {
+		if (start + len >= i_size)
 			to = round_up(to, cache->bsize);
 		else
 			to = round_down(to, cache->bsize);
-		subreq->len = to - subreq->start;
+		len = to - start;
+		*_len = len;
 	}
 
 	why = cachefiles_trace_read_have_data;
@@ -484,12 +483,11 @@ static enum netfs_io_source cachefiles_prepare_read(struct netfs_io_subrequest *
 	goto out;
 
 download_and_store:
-	__set_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags);
-	if (test_bit(NETFS_SREQ_ONDEMAND, &subreq->flags)) {
-		rc = cachefiles_ondemand_read(object, subreq->start,
-					      subreq->len);
+	__set_bit(NETFS_SREQ_COPY_TO_CACHE, _flags);
+	if (test_bit(NETFS_SREQ_ONDEMAND, _flags)) {
+		rc = cachefiles_ondemand_read(object, start, len);
 		if (!rc) {
-			__clear_bit(NETFS_SREQ_ONDEMAND, &subreq->flags);
+			__clear_bit(NETFS_SREQ_ONDEMAND, _flags);
 			goto retry;
 		}
 		ret = NETFS_INVALID_READ;
@@ -497,10 +495,34 @@ static enum netfs_io_source cachefiles_prepare_read(struct netfs_io_subrequest *
 out:
 	cachefiles_end_secure(cache, saved_cred);
 out_no_object:
-	trace_cachefiles_prep_read(subreq, ret, why, ino);
+	trace_cachefiles_prep_read(object, start, len, *_flags, ret, why, ino, netfs_ino);
 	return ret;
 }
 
+/*
+ * Prepare a read operation, shortening it to a cached/uncached
+ * boundary as appropriate.
+ */
+static enum netfs_io_source cachefiles_prepare_read(struct netfs_io_subrequest *subreq,
+						    loff_t i_size)
+{
+	return cachefiles_do_prepare_read(&subreq->rreq->cache_resources,
+					  subreq->start, &subreq->len, i_size,
+					  &subreq->flags, subreq->rreq->inode->i_ino);
+}
+
+/*
+ * Prepare an on-demand read operation, shortening it to a cached/uncached
+ * boundary as appropriate.
+ */
+static enum netfs_io_source
+cachefiles_prepare_ondemand_read(struct netfs_cache_resources *cres,
+				 loff_t start, size_t *_len, loff_t i_size,
+				 unsigned long *_flags, ino_t ino)
+{
+	return cachefiles_do_prepare_read(cres, start, _len, i_size, _flags, ino);
+}
+
 /*
  * Prepare for a write to occur.
  */
@@ -621,6 +643,7 @@ static const struct netfs_cache_ops cachefiles_netfs_cache_ops = {
 	.write = cachefiles_write,
 	.prepare_read = cachefiles_prepare_read,
 	.prepare_write = cachefiles_prepare_write,
+	.prepare_ondemand_read = cachefiles_prepare_ondemand_read,
 	.query_occupancy = cachefiles_query_occupancy,
 };
 
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index f2402ddeafbf..4c76ddfb6a67 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -267,6 +267,14 @@ struct netfs_cache_ops {
 				     loff_t *_start, size_t *_len, loff_t i_size,
 				     bool no_space_allocated_yet);
 
+	/* Prepare an on-demand read operation, shortening it to a cached/uncached
+	 * boundary as appropriate.
+	 */
+	enum netfs_io_source (*prepare_ondemand_read)(struct netfs_cache_resources *cres,
+						      loff_t start, size_t *_len,
+						      loff_t i_size,
+						      unsigned long *_flags, ino_t ino);
+
 	/* Query the occupancy of the cache in a region, returning where the
 	 * next chunk of data starts and how long it is.
 	 */
diff --git a/include/trace/events/cachefiles.h b/include/trace/events/cachefiles.h
index d8d4d73fe7b6..cf4b98b9a9ed 100644
--- a/include/trace/events/cachefiles.h
+++ b/include/trace/events/cachefiles.h
@@ -428,16 +428,18 @@ TRACE_EVENT(cachefiles_vol_coherency,
 	    );
 
 TRACE_EVENT(cachefiles_prep_read,
-	    TP_PROTO(struct netfs_io_subrequest *sreq,
+	    TP_PROTO(struct cachefiles_object *obj,
+		     loff_t start,
+		     size_t len,
+		     unsigned short flags,
 		     enum netfs_io_source source,
 		     enum cachefiles_prepare_read_trace why,
-		     ino_t cache_inode),
+		     ino_t cache_inode, ino_t netfs_inode),
 
-	    TP_ARGS(sreq, source, why, cache_inode),
+	    TP_ARGS(obj, start, len, flags, source, why, cache_inode, netfs_inode),
 
 	    TP_STRUCT__entry(
-		    __field(unsigned int,			rreq		)
-		    __field(unsigned short,			index		)
+		    __field(unsigned int,			obj		)
 		    __field(unsigned short,			flags		)
 		    __field(enum netfs_io_source,		source		)
 		    __field(enum cachefiles_prepare_read_trace,	why		)
@@ -448,19 +450,18 @@ TRACE_EVENT(cachefiles_prep_read,
 			     ),
 
 	    TP_fast_assign(
-		    __entry->rreq	= sreq->rreq->debug_id;
-		    __entry->index	= sreq->debug_index;
-		    __entry->flags	= sreq->flags;
+		    __entry->obj	= obj ? obj->debug_id : 0;
+		    __entry->flags	= flags;
 		    __entry->source	= source;
 		    __entry->why	= why;
-		    __entry->len	= sreq->len;
-		    __entry->start	= sreq->start;
-		    __entry->netfs_inode = sreq->rreq->inode->i_ino;
+		    __entry->len	= len;
+		    __entry->start	= start;
+		    __entry->netfs_inode = netfs_inode;
 		    __entry->cache_inode = cache_inode;
 			   ),
 
-	    TP_printk("R=%08x[%u] %s %s f=%02x s=%llx %zx ni=%x B=%x",
-		      __entry->rreq, __entry->index,
+	    TP_printk("o=%08x %s %s f=%02x s=%llx %zx ni=%x B=%x",
+		      __entry->obj,
 		      __print_symbolic(__entry->source, netfs_sreq_sources),
 		      __print_symbolic(__entry->why, cachefiles_prepare_read_traces),
 		      __entry->flags,