From patchwork Mon Feb 5 22:57:13 2024
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 197088
From: David Howells
To: Steve French
Cc: David Howells, Jeff Layton, Matthew Wilcox, Paulo Alcantara, Shyam Prasad N, Tom Talpey, Christian Brauner, netfs@lists.linux.dev, linux-cifs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Steve French, Shyam Prasad N, Rohith Surabattula
Subject: [PATCH v5 01/12] cifs: Replace cifs_readdata with a wrapper around netfs_io_subrequest
Date: Mon, 5 Feb 2024 22:57:13 +0000
Message-ID: <20240205225726.3104808-2-dhowells@redhat.com>
In-Reply-To: <20240205225726.3104808-1-dhowells@redhat.com>
References: <20240205225726.3104808-1-dhowells@redhat.com>

Netfslib has a facility whereby the allocation for netfs_io_subrequest can
be increased so that filesystem-specific data can be tagged on the end.

Prepare to use this by making a struct, cifs_io_subrequest, that wraps
netfs_io_subrequest, and absorb struct cifs_readdata into it.
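As an aside, the shape of the wrapper can be illustrated with a small,
self-contained user-space sketch. This is not the netfs/cifs code; io_core,
my_fs_subreq and alloc_subrequest are invented names. The idea is that a
library-owned descriptor sits at the start of a filesystem-private struct,
the allocation is sized for the whole wrapper, and the wrapper is recovered
from the embedded member with a container_of()-style macro.

/* Minimal user-space sketch of the "wrapper around a core subrequest"
 * pattern; the names are invented and do not match the kernel code. */
#include <stdio.h>
#include <stdlib.h>
#include <stddef.h>

/* container_of() equivalent: recover the wrapper from a pointer to the
 * embedded member. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Core descriptor owned by the generic I/O library. */
struct io_core {
	unsigned long long start;	/* file offset of the I/O */
	size_t len;			/* number of bytes */
};

/* Filesystem-private wrapper; the core is embedded first so that one
 * allocation covers both the library's and the filesystem's state. */
struct my_fs_subreq {
	struct io_core core;
	int credits;			/* fs-specific field */
};

/* The "library" allocates the enlarged structure in one shot: the caller
 * only tells it how big the wrapper is; the core lives at the start. */
static struct io_core *alloc_subrequest(size_t wrapper_size)
{
	return calloc(1, wrapper_size);
}

int main(void)
{
	struct io_core *core = alloc_subrequest(sizeof(struct my_fs_subreq));
	struct my_fs_subreq *req;

	if (!core)
		return 1;

	core->start = 4096;
	core->len = 1024;

	/* The filesystem gets its wrapper back from the core pointer. */
	req = container_of(core, struct my_fs_subreq, core);
	req->credits = 3;

	printf("start=%llu len=%zu credits=%d\n",
	       core->start, core->len, req->credits);
	free(req);
	return 0;
}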
Signed-off-by: David Howells cc: Steve French cc: Shyam Prasad N cc: Rohith Surabattula cc: Jeff Layton cc: linux-cifs@vger.kernel.org cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org cc: linux-mm@kvack.org --- fs/smb/client/cifsglob.h | 22 ++++++++++--------- fs/smb/client/cifsproto.h | 13 +++++++++-- fs/smb/client/cifssmb.c | 11 ++++------ fs/smb/client/file.c | 46 ++++++++++++++++++--------------------- fs/smb/client/smb2ops.c | 2 +- fs/smb/client/smb2pdu.c | 12 +++++----- fs/smb/client/smb2proto.h | 2 +- fs/smb/client/transport.c | 4 ++-- 8 files changed, 59 insertions(+), 53 deletions(-) diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h index decf80131bbe..1ffd1f3774f7 100644 --- a/fs/smb/client/cifsglob.h +++ b/fs/smb/client/cifsglob.h @@ -256,7 +256,7 @@ struct dfs_info3_param; struct cifs_fattr; struct smb3_fs_context; struct cifs_fid; -struct cifs_readdata; +struct cifs_io_subrequest; struct cifs_writedata; struct cifs_io_parms; struct cifs_search_info; @@ -434,7 +434,7 @@ struct smb_version_operations { /* send a flush request to the server */ int (*flush)(const unsigned int, struct cifs_tcon *, struct cifs_fid *); /* async read from the server */ - int (*async_readv)(struct cifs_readdata *); + int (*async_readv)(struct cifs_io_subrequest *); /* async write to the server */ int (*async_writev)(struct cifs_writedata *, void (*release)(struct kref *)); @@ -1465,26 +1465,28 @@ struct cifs_aio_ctx { }; /* asynchronous read support */ -struct cifs_readdata { - struct kref refcount; - struct list_head list; - struct completion done; +struct cifs_io_subrequest { + struct netfs_io_subrequest subreq; struct cifsFileInfo *cfile; struct address_space *mapping; struct cifs_aio_ctx *ctx; - __u64 offset; ssize_t got_bytes; - unsigned int bytes; pid_t pid; int result; - struct work_struct work; - struct iov_iter iter; struct kvec iov[2]; struct TCP_Server_Info *server; #ifdef CONFIG_CIFS_SMB_DIRECT struct smbd_mr *mr; #endif struct cifs_credits credits; + + // TODO: Remove following elements + struct list_head list; + struct completion done; + struct work_struct work; + struct iov_iter iter; + __u64 offset; + unsigned int bytes; }; /* asynchronous write support */ diff --git a/fs/smb/client/cifsproto.h b/fs/smb/client/cifsproto.h index 770db9026850..adf4a1f523e3 100644 --- a/fs/smb/client/cifsproto.h +++ b/fs/smb/client/cifsproto.h @@ -590,8 +590,17 @@ void __cifs_put_smb_ses(struct cifs_ses *ses); extern struct cifs_ses * cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx); -void cifs_readdata_release(struct kref *refcount); -int cifs_async_readv(struct cifs_readdata *rdata); +void cifs_readdata_release(struct cifs_io_subrequest *rdata); +static inline void cifs_get_readdata(struct cifs_io_subrequest *rdata) +{ + refcount_inc(&rdata->subreq.ref); +} +static inline void cifs_put_readdata(struct cifs_io_subrequest *rdata) +{ + if (refcount_dec_and_test(&rdata->subreq.ref)) + cifs_readdata_release(rdata); +} +int cifs_async_readv(struct cifs_io_subrequest *rdata); int cifs_readv_receive(struct TCP_Server_Info *server, struct mid_q_entry *mid); int cifs_async_writev(struct cifs_writedata *wdata, diff --git a/fs/smb/client/cifssmb.c b/fs/smb/client/cifssmb.c index 01e89070df5a..2d10c07c1295 100644 --- a/fs/smb/client/cifssmb.c +++ b/fs/smb/client/cifssmb.c @@ -24,6 +24,8 @@ #include #include #include +#include +#include #include "cifspdu.h" #include "cifsfs.h" #include "cifsglob.h" @@ -1262,12 +1264,11 @@ CIFS_open(const unsigned int xid, struct 
cifs_open_parms *oparms, int *oplock, static void cifs_readv_callback(struct mid_q_entry *mid) { - struct cifs_readdata *rdata = mid->callback_data; + struct cifs_io_subrequest *rdata = mid->callback_data; struct cifs_tcon *tcon = tlink_tcon(rdata->cfile->tlink); struct TCP_Server_Info *server = tcon->ses->server; struct smb_rqst rqst = { .rq_iov = rdata->iov, .rq_nvec = 2, - .rq_iter_size = iov_iter_count(&rdata->iter), .rq_iter = rdata->iter }; struct cifs_credits credits = { .value = 1, .instance = 0 }; @@ -1312,7 +1313,7 @@ cifs_readv_callback(struct mid_q_entry *mid) /* cifs_async_readv - send an async write, and set up mid to handle result */ int -cifs_async_readv(struct cifs_readdata *rdata) +cifs_async_readv(struct cifs_io_subrequest *rdata) { int rc; READ_REQ *smb = NULL; @@ -1364,15 +1365,11 @@ cifs_async_readv(struct cifs_readdata *rdata) rdata->iov[1].iov_base = (char *)smb + 4; rdata->iov[1].iov_len = get_rfc1002_length(smb); - kref_get(&rdata->refcount); rc = cifs_call_async(tcon->ses->server, &rqst, cifs_readv_receive, cifs_readv_callback, NULL, rdata, 0, NULL); if (rc == 0) cifs_stats_inc(&tcon->stats.cifs_stats.num_reads); - else - kref_put(&rdata->refcount, cifs_readdata_release); - cifs_small_buf_release(smb); return rc; } diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c index b75282c204da..dfafc31a7436 100644 --- a/fs/smb/client/file.c +++ b/fs/smb/client/file.c @@ -2951,7 +2951,7 @@ static int cifs_writepages_region(struct address_space *mapping, continue; } - folio_batch_release(&fbatch); + folio_batch_release(&fbatch); cond_resched(); } while (wbc->nr_to_write > 0); @@ -3786,13 +3786,13 @@ cifs_strict_writev(struct kiocb *iocb, struct iov_iter *from) return written; } -static struct cifs_readdata *cifs_readdata_alloc(work_func_t complete) +static struct cifs_io_subrequest *cifs_readdata_alloc(work_func_t complete) { - struct cifs_readdata *rdata; + struct cifs_io_subrequest *rdata; rdata = kzalloc(sizeof(*rdata), GFP_KERNEL); if (rdata) { - kref_init(&rdata->refcount); + refcount_set(&rdata->subreq.ref, 1); INIT_LIST_HEAD(&rdata->list); init_completion(&rdata->done); INIT_WORK(&rdata->work, complete); @@ -3802,11 +3802,8 @@ static struct cifs_readdata *cifs_readdata_alloc(work_func_t complete) } void -cifs_readdata_release(struct kref *refcount) +cifs_readdata_release(struct cifs_io_subrequest *rdata) { - struct cifs_readdata *rdata = container_of(refcount, - struct cifs_readdata, refcount); - if (rdata->ctx) kref_put(&rdata->ctx->refcount, cifs_aio_ctx_release); #ifdef CONFIG_CIFS_SMB_DIRECT @@ -3826,16 +3823,16 @@ static void collect_uncached_read_data(struct cifs_aio_ctx *ctx); static void cifs_uncached_readv_complete(struct work_struct *work) { - struct cifs_readdata *rdata = container_of(work, - struct cifs_readdata, work); + struct cifs_io_subrequest *rdata = + container_of(work, struct cifs_io_subrequest, work); complete(&rdata->done); collect_uncached_read_data(rdata->ctx); /* the below call can possibly free the last ref to aio ctx */ - kref_put(&rdata->refcount, cifs_readdata_release); + cifs_put_readdata(rdata); } -static int cifs_resend_rdata(struct cifs_readdata *rdata, +static int cifs_resend_rdata(struct cifs_io_subrequest *rdata, struct list_head *rdata_list, struct cifs_aio_ctx *ctx) { @@ -3903,7 +3900,7 @@ static int cifs_resend_rdata(struct cifs_readdata *rdata, } while (rc == -EAGAIN); fail: - kref_put(&rdata->refcount, cifs_readdata_release); + cifs_put_readdata(rdata); return rc; } @@ -3912,7 +3909,7 @@ cifs_send_async_read(loff_t fpos, 
size_t len, struct cifsFileInfo *open_file, struct cifs_sb_info *cifs_sb, struct list_head *rdata_list, struct cifs_aio_ctx *ctx) { - struct cifs_readdata *rdata; + struct cifs_io_subrequest *rdata; unsigned int rsize, nsegs, max_segs = INT_MAX; struct cifs_credits credits_on_stack; struct cifs_credits *credits = &credits_on_stack; @@ -3994,7 +3991,7 @@ cifs_send_async_read(loff_t fpos, size_t len, struct cifsFileInfo *open_file, if (rc) { add_credits_and_wake_if(server, &rdata->credits, 0); - kref_put(&rdata->refcount, cifs_readdata_release); + cifs_put_readdata(rdata); if (rc == -EAGAIN) continue; break; @@ -4012,7 +4009,7 @@ cifs_send_async_read(loff_t fpos, size_t len, struct cifsFileInfo *open_file, static void collect_uncached_read_data(struct cifs_aio_ctx *ctx) { - struct cifs_readdata *rdata, *tmp; + struct cifs_io_subrequest *rdata, *tmp; struct cifs_sb_info *cifs_sb; int rc; @@ -4058,8 +4055,7 @@ collect_uncached_read_data(struct cifs_aio_ctx *ctx) rdata->cfile, cifs_sb, &tmp_list, ctx); - kref_put(&rdata->refcount, - cifs_readdata_release); + cifs_put_readdata(rdata); } list_splice(&tmp_list, &ctx->list); @@ -4075,7 +4071,7 @@ collect_uncached_read_data(struct cifs_aio_ctx *ctx) ctx->total_len += rdata->got_bytes; } list_del_init(&rdata->list); - kref_put(&rdata->refcount, cifs_readdata_release); + cifs_put_readdata(rdata); } /* mask nodata case */ @@ -4447,8 +4443,8 @@ static void cifs_unlock_folios(struct address_space *mapping, pgoff_t first, pgo static void cifs_readahead_complete(struct work_struct *work) { - struct cifs_readdata *rdata = container_of(work, - struct cifs_readdata, work); + struct cifs_io_subrequest *rdata = container_of(work, + struct cifs_io_subrequest, work); struct folio *folio; pgoff_t last; bool good = rdata->result == 0 || (rdata->result == -EAGAIN && rdata->got_bytes); @@ -4474,7 +4470,7 @@ static void cifs_readahead_complete(struct work_struct *work) } rcu_read_unlock(); - kref_put(&rdata->refcount, cifs_readdata_release); + cifs_put_readdata(rdata); } static void cifs_readahead(struct readahead_control *ractl) @@ -4514,7 +4510,7 @@ static void cifs_readahead(struct readahead_control *ractl) */ while ((nr_pages = ra_pages)) { unsigned int i, rsize; - struct cifs_readdata *rdata; + struct cifs_io_subrequest *rdata; struct cifs_credits credits_on_stack; struct cifs_credits *credits = &credits_on_stack; struct folio *folio; @@ -4633,11 +4629,11 @@ static void cifs_readahead(struct readahead_control *ractl) rdata->offset / PAGE_SIZE, (rdata->offset + rdata->bytes - 1) / PAGE_SIZE); /* Fallback to the readpage in error/reconnect cases */ - kref_put(&rdata->refcount, cifs_readdata_release); + cifs_put_readdata(rdata); break; } - kref_put(&rdata->refcount, cifs_readdata_release); + cifs_put_readdata(rdata); } free_xid(xid); diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c index 708973ec750b..9da9ee98ae39 100644 --- a/fs/smb/client/smb2ops.c +++ b/fs/smb/client/smb2ops.c @@ -4478,7 +4478,7 @@ handle_read_data(struct TCP_Server_Info *server, struct mid_q_entry *mid, unsigned int cur_off; unsigned int cur_page_idx; unsigned int pad_len; - struct cifs_readdata *rdata = mid->callback_data; + struct cifs_io_subrequest *rdata = mid->callback_data; struct smb2_hdr *shdr = (struct smb2_hdr *)buf; int length; bool use_rdma_mr = false; diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c index cad402b426b7..3301a80a0f7e 100644 --- a/fs/smb/client/smb2pdu.c +++ b/fs/smb/client/smb2pdu.c @@ -23,6 +23,8 @@ #include #include #include +#include 
+#include #include "cifsglob.h" #include "cifsacl.h" #include "cifsproto.h" @@ -4356,7 +4358,7 @@ static inline bool smb3_use_rdma_offload(struct cifs_io_parms *io_parms) */ static int smb2_new_read_req(void **buf, unsigned int *total_len, - struct cifs_io_parms *io_parms, struct cifs_readdata *rdata, + struct cifs_io_parms *io_parms, struct cifs_io_subrequest *rdata, unsigned int remaining_bytes, int request_type) { int rc = -EACCES; @@ -4448,7 +4450,7 @@ smb2_new_read_req(void **buf, unsigned int *total_len, static void smb2_readv_callback(struct mid_q_entry *mid) { - struct cifs_readdata *rdata = mid->callback_data; + struct cifs_io_subrequest *rdata = mid->callback_data; struct cifs_tcon *tcon = tlink_tcon(rdata->cfile->tlink); struct TCP_Server_Info *server = rdata->server; struct smb2_hdr *shdr = @@ -4535,7 +4537,7 @@ smb2_readv_callback(struct mid_q_entry *mid) /* smb2_async_readv - send an async read, and set up mid to handle result */ int -smb2_async_readv(struct cifs_readdata *rdata) +smb2_async_readv(struct cifs_io_subrequest *rdata) { int rc, flags = 0; char *buf; @@ -4593,13 +4595,13 @@ smb2_async_readv(struct cifs_readdata *rdata) flags |= CIFS_HAS_CREDITS; } - kref_get(&rdata->refcount); + cifs_get_readdata(rdata); rc = cifs_call_async(server, &rqst, cifs_readv_receive, smb2_readv_callback, smb3_handle_read_data, rdata, flags, &rdata->credits); if (rc) { - kref_put(&rdata->refcount, cifs_readdata_release); + cifs_put_readdata(rdata); cifs_stats_fail_inc(io_parms.tcon, SMB2_READ_HE); trace_smb3_read_err(0 /* xid */, io_parms.persistent_fid, io_parms.tcon->tid, diff --git a/fs/smb/client/smb2proto.h b/fs/smb/client/smb2proto.h index d910d38671ee..6df87d294fbd 100644 --- a/fs/smb/client/smb2proto.h +++ b/fs/smb/client/smb2proto.h @@ -208,7 +208,7 @@ extern int SMB2_query_acl(const unsigned int xid, struct cifs_tcon *tcon, extern int SMB2_get_srv_num(const unsigned int xid, struct cifs_tcon *tcon, u64 persistent_fid, u64 volatile_fid, __le64 *uniqueid); -extern int smb2_async_readv(struct cifs_readdata *rdata); +extern int smb2_async_readv(struct cifs_io_subrequest *rdata); extern int SMB2_read(const unsigned int xid, struct cifs_io_parms *io_parms, unsigned int *nbytes, char **buf, int *buf_type); extern int smb2_async_writev(struct cifs_writedata *wdata, diff --git a/fs/smb/client/transport.c b/fs/smb/client/transport.c index 994d70193432..a5bab478e6de 100644 --- a/fs/smb/client/transport.c +++ b/fs/smb/client/transport.c @@ -1687,7 +1687,7 @@ __cifs_readv_discard(struct TCP_Server_Info *server, struct mid_q_entry *mid, static int cifs_readv_discard(struct TCP_Server_Info *server, struct mid_q_entry *mid) { - struct cifs_readdata *rdata = mid->callback_data; + struct cifs_io_subrequest *rdata = mid->callback_data; return __cifs_readv_discard(server, mid, rdata->result); } @@ -1697,7 +1697,7 @@ cifs_readv_receive(struct TCP_Server_Info *server, struct mid_q_entry *mid) { int length, len; unsigned int data_offset, data_len; - struct cifs_readdata *rdata = mid->callback_data; + struct cifs_io_subrequest *rdata = mid->callback_data; char *buf = server->smallbuf; unsigned int buflen = server->pdu_size + HEADER_PREAMBLE_SIZE(server); bool use_rdma_mr = false; From patchwork Mon Feb 5 22:57:14 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 197090 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7301:168b:b0:106:860b:bbdd with SMTP id ma11csp1196253dyb; Mon, 5 
From: David Howells
To: Steve French
Cc: David Howells, Jeff Layton, Matthew Wilcox, Paulo Alcantara, Shyam Prasad N, Tom Talpey, Christian Brauner, netfs@lists.linux.dev, linux-cifs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Steve French, Shyam Prasad N, Rohith Surabattula
Subject: [PATCH v5 02/12] cifs: Set zero_point in the copy_file_range() and remap_file_range()
Date: Mon, 5 Feb 2024 22:57:14 +0000
Message-ID: <20240205225726.3104808-3-dhowells@redhat.com>
In-Reply-To: <20240205225726.3104808-1-dhowells@redhat.com>
References: <20240205225726.3104808-1-dhowells@redhat.com>

Set zero_point in the copy_file_range() and remap_file_range()
implementations so that we don't skip reading data modified on a
server-side copy.

Signed-off-by: David Howells
cc: Steve French
cc: Shyam Prasad N
cc: Rohith Surabattula
cc: Jeff Layton
cc: linux-cifs@vger.kernel.org
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/smb/client/cifsfs.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/fs/smb/client/cifsfs.c b/fs/smb/client/cifsfs.c
index 2a4a4e3a8751..41617541d175 100644
--- a/fs/smb/client/cifsfs.c
+++ b/fs/smb/client/cifsfs.c
@@ -1340,6 +1340,8 @@ static loff_t cifs_remap_file_range(struct file *src_file, loff_t off,
 		rc = cifs_flush_folio(target_inode, destend, &fstart, &fend, false);
 		if (rc)
 			goto unlock;
+		if (fend > target_cifsi->netfs.zero_point)
+			target_cifsi->netfs.zero_point = fend + 1;
 
 		/* Discard all the folios that overlap the destination region. */
 		cifs_dbg(FYI, "about to discard pages %llx-%llx\n", fstart, fend);
@@ -1358,6 +1360,8 @@ static loff_t cifs_remap_file_range(struct file *src_file, loff_t off,
 			fscache_resize_cookie(cifs_inode_cookie(target_inode),
 					      new_size);
 		}
+		if (rc == 0 && new_size > target_cifsi->netfs.zero_point)
+			target_cifsi->netfs.zero_point = new_size;
 	}
 
 	/* force revalidate of size and timestamps of target file now
@@ -1449,6 +1453,8 @@ ssize_t cifs_file_copychunk_range(unsigned int xid,
 		rc = cifs_flush_folio(target_inode, destend, &fstart, &fend, false);
 		if (rc)
 			goto unlock;
+		if (fend > target_cifsi->netfs.zero_point)
+			target_cifsi->netfs.zero_point = fend + 1;
 
 		/* Discard all the folios that overlap the destination region. */
 		truncate_inode_pages_range(&target_inode->i_data, fstart, fend);
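The intent of the zero_point adjustment can be modelled outside the kernel.
The sketch below is illustrative only and does not reflect the actual netfs
fields or API: it assumes zero_point marks the offset at or beyond which the
file is taken to be zero-filled, so reads there are satisfied locally; a
server-side copy that lands data beyond that mark must push it forward or
the new data would never be fetched.

/* Illustrative model of a "zero_point" marker, under the assumption stated
 * above; this is not the netfs implementation or its API. */
#include <stdbool.h>
#include <stdio.h>

struct cached_file {
	/* Offsets at or beyond this point are assumed to be zeros on the
	 * server, so reads there can be filled locally without an RPC. */
	unsigned long long zero_point;
};

/* Would a read of [pos, pos + len) need to go to the server? */
static bool read_needs_server(const struct cached_file *f,
			      unsigned long long pos, unsigned long long len)
{
	return pos < f->zero_point && len != 0;
}

/* Call after the server has materialised data in [start, end] behind our
 * back (e.g. a server-side copy): push zero_point past the new data so the
 * next read of that range is not short-circuited to zeros. */
static void note_server_side_write(struct cached_file *f,
				   unsigned long long start,
				   unsigned long long end)
{
	(void)start;
	if (end >= f->zero_point)
		f->zero_point = end + 1;
}

int main(void)
{
	struct cached_file f = { .zero_point = 8192 };

	printf("read@16384 needs server? %d\n",
	       read_needs_server(&f, 16384, 4096));	/* 0: assumed zeros */

	note_server_side_write(&f, 16384, 20479);	/* server-side copy */

	printf("read@16384 needs server? %d\n",
	       read_needs_server(&f, 16384, 4096));	/* 1: must re-read */
	return 0;
}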
From patchwork Mon Feb 5 22:57:15 2024
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 197102
From: David Howells
To: Steve French
Cc: David Howells, Jeff Layton, Matthew Wilcox, Paulo Alcantara, Shyam Prasad N, Tom Talpey, Christian Brauner, netfs@lists.linux.dev, linux-cifs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Steve French, Shyam Prasad N, Rohith Surabattula
Subject: [PATCH v5 03/12] cifs: Replace cifs_writedata with a wrapper around netfs_io_subrequest
Date: Mon, 5 Feb 2024 22:57:15 +0000
Message-ID: <20240205225726.3104808-4-dhowells@redhat.com>
In-Reply-To: <20240205225726.3104808-1-dhowells@redhat.com>
References: <20240205225726.3104808-1-dhowells@redhat.com>

Replace the cifs_writedata struct with the same wrapper around
netfs_io_subrequest that was used to replace cifs_readdata.
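The reference-counting side of these conversions replaces a per-structure
kref (plus a release callback passed around by callers) with small get/put
helpers over the count already embedded in the subrequest. A rough
user-space sketch of that shape, using C11 atomics rather than the kernel's
refcount_t and invented names throughout:

/* Sketch of get/put helpers around a reference count embedded in a core
 * structure; uses C11 atomics, not the kernel's refcount_t. */
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct io_core {
	atomic_int ref;			/* lifetime of the whole wrapper */
};

struct fs_writedata {
	struct io_core core;
	int result;
};

static struct fs_writedata *writedata_alloc(void)
{
	struct fs_writedata *wdata = calloc(1, sizeof(*wdata));

	if (wdata)
		atomic_store(&wdata->core.ref, 1);
	return wdata;
}

static void writedata_release(struct fs_writedata *wdata)
{
	printf("releasing wdata, result=%d\n", wdata->result);
	free(wdata);
}

/* The helpers hide where the count lives, so callers no longer need to
 * pass a release function around as the old kref-based code did. */
static void get_writedata(struct fs_writedata *wdata)
{
	atomic_fetch_add(&wdata->core.ref, 1);
}

static void put_writedata(struct fs_writedata *wdata)
{
	if (atomic_fetch_sub(&wdata->core.ref, 1) == 1)
		writedata_release(wdata);
}

int main(void)
{
	struct fs_writedata *wdata = writedata_alloc();

	if (!wdata)
		return 1;
	get_writedata(wdata);		/* e.g. handed to async I/O */
	wdata->result = 0;
	put_writedata(wdata);		/* async completion drops its ref */
	put_writedata(wdata);		/* original reference: frees */
	return 0;
}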
Signed-off-by: David Howells cc: Steve French cc: Shyam Prasad N cc: Rohith Surabattula cc: Jeff Layton cc: linux-cifs@vger.kernel.org cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org cc: linux-mm@kvack.org --- fs/smb/client/cifsglob.h | 32 +++------------- fs/smb/client/cifsproto.h | 16 ++++++-- fs/smb/client/cifssmb.c | 9 ++--- fs/smb/client/file.c | 79 ++++++++++++++++----------------------- fs/smb/client/smb2pdu.c | 9 ++--- fs/smb/client/smb2proto.h | 3 +- 6 files changed, 59 insertions(+), 89 deletions(-) diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h index 1ffd1f3774f7..7f7073813612 100644 --- a/fs/smb/client/cifsglob.h +++ b/fs/smb/client/cifsglob.h @@ -257,7 +257,6 @@ struct cifs_fattr; struct smb3_fs_context; struct cifs_fid; struct cifs_io_subrequest; -struct cifs_writedata; struct cifs_io_parms; struct cifs_search_info; struct cifsInodeInfo; @@ -436,8 +435,7 @@ struct smb_version_operations { /* async read from the server */ int (*async_readv)(struct cifs_io_subrequest *); /* async write to the server */ - int (*async_writev)(struct cifs_writedata *, - void (*release)(struct kref *)); + int (*async_writev)(struct cifs_io_subrequest *); /* sync read from the server */ int (*sync_read)(const unsigned int, struct cifs_fid *, struct cifs_io_parms *, unsigned int *, char **, @@ -1480,36 +1478,18 @@ struct cifs_io_subrequest { #endif struct cifs_credits credits; - // TODO: Remove following elements - struct list_head list; - struct completion done; - struct work_struct work; - struct iov_iter iter; - __u64 offset; - unsigned int bytes; -}; + enum writeback_sync_modes sync_mode; + bool uncached; + bool replay; + struct bio_vec *bv; -/* asynchronous write support */ -struct cifs_writedata { - struct kref refcount; + // TODO: Remove following elements struct list_head list; struct completion done; - enum writeback_sync_modes sync_mode; struct work_struct work; - struct cifsFileInfo *cfile; - struct cifs_aio_ctx *ctx; struct iov_iter iter; - struct bio_vec *bv; __u64 offset; - pid_t pid; unsigned int bytes; - int result; - struct TCP_Server_Info *server; -#ifdef CONFIG_CIFS_SMB_DIRECT - struct smbd_mr *mr; -#endif - struct cifs_credits credits; - bool replay; }; /* diff --git a/fs/smb/client/cifsproto.h b/fs/smb/client/cifsproto.h index adf4a1f523e3..867077be859c 100644 --- a/fs/smb/client/cifsproto.h +++ b/fs/smb/client/cifsproto.h @@ -603,11 +603,19 @@ static inline void cifs_put_readdata(struct cifs_io_subrequest *rdata) int cifs_async_readv(struct cifs_io_subrequest *rdata); int cifs_readv_receive(struct TCP_Server_Info *server, struct mid_q_entry *mid); -int cifs_async_writev(struct cifs_writedata *wdata, - void (*release)(struct kref *kref)); +int cifs_async_writev(struct cifs_io_subrequest *wdata); void cifs_writev_complete(struct work_struct *work); -struct cifs_writedata *cifs_writedata_alloc(work_func_t complete); -void cifs_writedata_release(struct kref *refcount); +struct cifs_io_subrequest *cifs_writedata_alloc(work_func_t complete); +void cifs_writedata_release(struct cifs_io_subrequest *rdata); +static inline void cifs_get_writedata(struct cifs_io_subrequest *wdata) +{ + refcount_inc(&wdata->subreq.ref); +} +static inline void cifs_put_writedata(struct cifs_io_subrequest *wdata) +{ + if (refcount_dec_and_test(&wdata->subreq.ref)) + cifs_writedata_release(wdata); +} int cifs_query_mf_symlink(unsigned int xid, struct cifs_tcon *tcon, struct cifs_sb_info *cifs_sb, const unsigned char *path, char *pbuf, diff --git a/fs/smb/client/cifssmb.c 
b/fs/smb/client/cifssmb.c index 2d10c07c1295..1ead867a0f4a 100644 --- a/fs/smb/client/cifssmb.c +++ b/fs/smb/client/cifssmb.c @@ -1612,7 +1612,7 @@ CIFSSMBWrite(const unsigned int xid, struct cifs_io_parms *io_parms, static void cifs_writev_callback(struct mid_q_entry *mid) { - struct cifs_writedata *wdata = mid->callback_data; + struct cifs_io_subrequest *wdata = mid->callback_data; struct cifs_tcon *tcon = tlink_tcon(wdata->cfile->tlink); unsigned int written; WRITE_RSP *smb = (WRITE_RSP *)mid->resp_buf; @@ -1657,8 +1657,7 @@ cifs_writev_callback(struct mid_q_entry *mid) /* cifs_async_writev - send an async write, and set up mid to handle result */ int -cifs_async_writev(struct cifs_writedata *wdata, - void (*release)(struct kref *kref)) +cifs_async_writev(struct cifs_io_subrequest *wdata) { int rc = -EACCES; WRITE_REQ *smb = NULL; @@ -1725,14 +1724,14 @@ cifs_async_writev(struct cifs_writedata *wdata, iov[1].iov_len += 4; /* pad bigger by four bytes */ } - kref_get(&wdata->refcount); + cifs_get_writedata(wdata); rc = cifs_call_async(tcon->ses->server, &rqst, NULL, cifs_writev_callback, NULL, wdata, 0, NULL); if (rc == 0) cifs_stats_inc(&tcon->stats.cifs_stats.num_writes); else - kref_put(&wdata->refcount, release); + cifs_put_writedata(wdata); async_writev_out: cifs_small_buf_release(smb); diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c index dfafc31a7436..6c7e6a513b90 100644 --- a/fs/smb/client/file.c +++ b/fs/smb/client/file.c @@ -2413,10 +2413,10 @@ cifs_get_readable_path(struct cifs_tcon *tcon, const char *name, } void -cifs_writedata_release(struct kref *refcount) +cifs_writedata_release(struct cifs_io_subrequest *wdata) { - struct cifs_writedata *wdata = container_of(refcount, - struct cifs_writedata, refcount); + if (wdata->uncached) + kref_put(&wdata->ctx->refcount, cifs_aio_ctx_release); #ifdef CONFIG_CIFS_SMB_DIRECT if (wdata->mr) { smbd_deregister_mr(wdata->mr); @@ -2435,7 +2435,7 @@ cifs_writedata_release(struct kref *refcount) * possible that the page was redirtied so re-clean the page. 
*/ static void -cifs_writev_requeue(struct cifs_writedata *wdata) +cifs_writev_requeue(struct cifs_io_subrequest *wdata) { int rc = 0; struct inode *inode = d_inode(wdata->cfile->dentry); @@ -2445,7 +2445,7 @@ cifs_writev_requeue(struct cifs_writedata *wdata) server = tlink_tcon(wdata->cfile->tlink)->ses->server; do { - struct cifs_writedata *wdata2; + struct cifs_io_subrequest *wdata2; unsigned int wsize, cur_len; wsize = server->ops->wp_retry_size(inode); @@ -2468,7 +2468,7 @@ cifs_writev_requeue(struct cifs_writedata *wdata) wdata2->sync_mode = wdata->sync_mode; wdata2->offset = fpos; wdata2->bytes = cur_len; - wdata2->iter = wdata->iter; + wdata2->iter = wdata->iter; iov_iter_advance(&wdata2->iter, fpos - wdata->offset); iov_iter_truncate(&wdata2->iter, wdata2->bytes); @@ -2490,11 +2490,10 @@ cifs_writev_requeue(struct cifs_writedata *wdata) rc = -EBADF; } else { wdata2->pid = wdata2->cfile->pid; - rc = server->ops->async_writev(wdata2, - cifs_writedata_release); + rc = server->ops->async_writev(wdata2); } - kref_put(&wdata2->refcount, cifs_writedata_release); + cifs_put_writedata(wdata2); if (rc) { if (is_retryable_error(rc)) continue; @@ -2513,14 +2512,14 @@ cifs_writev_requeue(struct cifs_writedata *wdata) if (rc != 0 && !is_retryable_error(rc)) mapping_set_error(inode->i_mapping, rc); - kref_put(&wdata->refcount, cifs_writedata_release); + cifs_put_writedata(wdata); } void cifs_writev_complete(struct work_struct *work) { - struct cifs_writedata *wdata = container_of(work, - struct cifs_writedata, work); + struct cifs_io_subrequest *wdata = container_of(work, + struct cifs_io_subrequest, work); struct inode *inode = d_inode(wdata->cfile->dentry); if (wdata->result == 0) { @@ -2541,16 +2540,16 @@ cifs_writev_complete(struct work_struct *work) if (wdata->result != -EAGAIN) mapping_set_error(inode->i_mapping, wdata->result); - kref_put(&wdata->refcount, cifs_writedata_release); + cifs_put_writedata(wdata); } -struct cifs_writedata *cifs_writedata_alloc(work_func_t complete) +struct cifs_io_subrequest *cifs_writedata_alloc(work_func_t complete) { - struct cifs_writedata *wdata; + struct cifs_io_subrequest *wdata; wdata = kzalloc(sizeof(*wdata), GFP_NOFS); if (wdata != NULL) { - kref_init(&wdata->refcount); + refcount_set(&wdata->subreq.ref, 1); INIT_LIST_HEAD(&wdata->list); init_completion(&wdata->done); INIT_WORK(&wdata->work, complete); @@ -2731,7 +2730,7 @@ static ssize_t cifs_write_back_from_locked_folio(struct address_space *mapping, { struct inode *inode = mapping->host; struct TCP_Server_Info *server; - struct cifs_writedata *wdata; + struct cifs_io_subrequest *wdata; struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb); struct cifs_credits credits_on_stack; struct cifs_credits *credits = &credits_on_stack; @@ -2823,10 +2822,9 @@ static ssize_t cifs_write_back_from_locked_folio(struct address_space *mapping, if (wdata->cfile->invalidHandle) rc = -EAGAIN; else - rc = wdata->server->ops->async_writev(wdata, - cifs_writedata_release); + rc = wdata->server->ops->async_writev(wdata); if (rc >= 0) { - kref_put(&wdata->refcount, cifs_writedata_release); + cifs_put_writedata(wdata); goto err_close; } } else { @@ -2836,7 +2834,7 @@ static ssize_t cifs_write_back_from_locked_folio(struct address_space *mapping, } err_wdata: - kref_put(&wdata->refcount, cifs_writedata_release); + cifs_put_writedata(wdata); err_uncredit: add_credits_and_wake_if(server, credits, 0); err_close: @@ -3225,23 +3223,13 @@ int cifs_flush(struct file *file, fl_owner_t id) return rc; } -static void 
-cifs_uncached_writedata_release(struct kref *refcount) -{ - struct cifs_writedata *wdata = container_of(refcount, - struct cifs_writedata, refcount); - - kref_put(&wdata->ctx->refcount, cifs_aio_ctx_release); - cifs_writedata_release(refcount); -} - static void collect_uncached_write_data(struct cifs_aio_ctx *ctx); static void cifs_uncached_writev_complete(struct work_struct *work) { - struct cifs_writedata *wdata = container_of(work, - struct cifs_writedata, work); + struct cifs_io_subrequest *wdata = container_of(work, + struct cifs_io_subrequest, work); struct inode *inode = d_inode(wdata->cfile->dentry); struct cifsInodeInfo *cifsi = CIFS_I(inode); @@ -3254,11 +3242,11 @@ cifs_uncached_writev_complete(struct work_struct *work) complete(&wdata->done); collect_uncached_write_data(wdata->ctx); /* the below call can possibly free the last ref to aio ctx */ - kref_put(&wdata->refcount, cifs_uncached_writedata_release); + cifs_put_writedata(wdata); } static int -cifs_resend_wdata(struct cifs_writedata *wdata, struct list_head *wdata_list, +cifs_resend_wdata(struct cifs_io_subrequest *wdata, struct list_head *wdata_list, struct cifs_aio_ctx *ctx) { unsigned int wsize; @@ -3308,8 +3296,7 @@ cifs_resend_wdata(struct cifs_writedata *wdata, struct list_head *wdata_list, wdata->mr = NULL; } #endif - rc = server->ops->async_writev(wdata, - cifs_uncached_writedata_release); + rc = server->ops->async_writev(wdata); } } @@ -3324,7 +3311,7 @@ cifs_resend_wdata(struct cifs_writedata *wdata, struct list_head *wdata_list, } while (rc == -EAGAIN); fail: - kref_put(&wdata->refcount, cifs_uncached_writedata_release); + cifs_put_writedata(wdata); return rc; } @@ -3376,7 +3363,7 @@ cifs_write_from_iter(loff_t fpos, size_t len, struct iov_iter *from, { int rc = 0; size_t cur_len, max_len; - struct cifs_writedata *wdata; + struct cifs_io_subrequest *wdata; pid_t pid; struct TCP_Server_Info *server; unsigned int xid, max_segs = INT_MAX; @@ -3440,6 +3427,7 @@ cifs_write_from_iter(loff_t fpos, size_t len, struct iov_iter *from, break; } + wdata->uncached = true; wdata->sync_mode = WB_SYNC_ALL; wdata->offset = (__u64)fpos; wdata->cfile = cifsFileInfo_get(open_file); @@ -3459,14 +3447,12 @@ cifs_write_from_iter(loff_t fpos, size_t len, struct iov_iter *from, if (wdata->cfile->invalidHandle) rc = -EAGAIN; else - rc = server->ops->async_writev(wdata, - cifs_uncached_writedata_release); + rc = server->ops->async_writev(wdata); } if (rc) { add_credits_and_wake_if(server, &wdata->credits, 0); - kref_put(&wdata->refcount, - cifs_uncached_writedata_release); + cifs_put_writedata(wdata); if (rc == -EAGAIN) continue; break; @@ -3484,7 +3470,7 @@ cifs_write_from_iter(loff_t fpos, size_t len, struct iov_iter *from, static void collect_uncached_write_data(struct cifs_aio_ctx *ctx) { - struct cifs_writedata *wdata, *tmp; + struct cifs_io_subrequest *wdata, *tmp; struct cifs_tcon *tcon; struct cifs_sb_info *cifs_sb; struct dentry *dentry = ctx->cfile->dentry; @@ -3539,8 +3525,7 @@ static void collect_uncached_write_data(struct cifs_aio_ctx *ctx) ctx->cfile, cifs_sb, &tmp_list, ctx); - kref_put(&wdata->refcount, - cifs_uncached_writedata_release); + cifs_put_writedata(wdata); } list_splice(&tmp_list, &ctx->list); @@ -3548,7 +3533,7 @@ static void collect_uncached_write_data(struct cifs_aio_ctx *ctx) } } list_del_init(&wdata->list); - kref_put(&wdata->refcount, cifs_uncached_writedata_release); + cifs_put_writedata(wdata); } cifs_stats_bytes_written(tcon, ctx->total_len); diff --git a/fs/smb/client/smb2pdu.c 
b/fs/smb/client/smb2pdu.c index 3301a80a0f7e..db739806343d 100644 --- a/fs/smb/client/smb2pdu.c +++ b/fs/smb/client/smb2pdu.c @@ -4702,7 +4702,7 @@ SMB2_read(const unsigned int xid, struct cifs_io_parms *io_parms, static void smb2_writev_callback(struct mid_q_entry *mid) { - struct cifs_writedata *wdata = mid->callback_data; + struct cifs_io_subrequest *wdata = mid->callback_data; struct cifs_tcon *tcon = tlink_tcon(wdata->cfile->tlink); struct TCP_Server_Info *server = wdata->server; unsigned int written; @@ -4783,8 +4783,7 @@ smb2_writev_callback(struct mid_q_entry *mid) /* smb2_async_writev - send an async write, and set up mid to handle result */ int -smb2_async_writev(struct cifs_writedata *wdata, - void (*release)(struct kref *kref)) +smb2_async_writev(struct cifs_io_subrequest *wdata) { int rc = -EACCES, flags = 0; struct smb2_write_req *req = NULL; @@ -4918,7 +4917,7 @@ smb2_async_writev(struct cifs_writedata *wdata, flags |= CIFS_HAS_CREDITS; } - kref_get(&wdata->refcount); + cifs_get_writedata(wdata); rc = cifs_call_async(server, &rqst, NULL, smb2_writev_callback, NULL, wdata, flags, &wdata->credits); @@ -4930,7 +4929,7 @@ smb2_async_writev(struct cifs_writedata *wdata, io_parms->offset, io_parms->length, rc); - kref_put(&wdata->refcount, release); + cifs_put_writedata(wdata); cifs_stats_fail_inc(tcon, SMB2_WRITE_HE); } diff --git a/fs/smb/client/smb2proto.h b/fs/smb/client/smb2proto.h index 6df87d294fbd..39ac8ed06281 100644 --- a/fs/smb/client/smb2proto.h +++ b/fs/smb/client/smb2proto.h @@ -211,8 +211,7 @@ extern int SMB2_get_srv_num(const unsigned int xid, struct cifs_tcon *tcon, extern int smb2_async_readv(struct cifs_io_subrequest *rdata); extern int SMB2_read(const unsigned int xid, struct cifs_io_parms *io_parms, unsigned int *nbytes, char **buf, int *buf_type); -extern int smb2_async_writev(struct cifs_writedata *wdata, - void (*release)(struct kref *kref)); +extern int smb2_async_writev(struct cifs_io_subrequest *wdata); extern int SMB2_write(const unsigned int xid, struct cifs_io_parms *io_parms, unsigned int *nbytes, struct kvec *iov, int n_vec); extern int SMB2_echo(struct TCP_Server_Info *server); From patchwork Mon Feb 5 22:57:16 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 197093 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7301:168b:b0:106:860b:bbdd with SMTP id ma11csp1197411dyb; Mon, 5 Feb 2024 15:01:04 -0800 (PST) X-Google-Smtp-Source: AGHT+IE0uu43L39lw6BC7/LcuarKvViPxzAO59+5SN70FgtqVuMukRhRnqFt8l9AbtEGfDYvOy8q X-Received: by 2002:a05:620a:a02:b0:784:8de:f5e3 with SMTP id i2-20020a05620a0a0200b0078408def5e3mr868717qka.2.1707174064676; Mon, 05 Feb 2024 15:01:04 -0800 (PST) ARC-Seal: i=2; a=rsa-sha256; t=1707174064; cv=pass; d=google.com; s=arc-20160816; b=O6pPr1kzG5W2v9dbHGjRWJLQfM8Ue3+Jdjd5pR3zbEcjrl36BYlvjULUPnGNGyaBeq LU3vNzu5sFS9yr4MZuPjwoxq7y8l44hVrINqjtsQ7aY5MawYoKgJtBEwUyy+PNvrgwHV z7aCZZMhkU7vjFe58O5fOS/X7jHc0cHzz4J0x+yl5s/q6U51g3zLFDjABzjmtmQiCrwX S7Pjc771EUwDyP9RnW/WRUqmb/Jg6jPxi6SMYo1oQzj1IZqVrtX6zkAaeiLGsVay8dMm Xni/iB2D9OQ3CpirpIpeUB3zweZBBP2LzGvMq0G/pfj4MwEY7IC/bLOZDdiEyxkXFL12 wEOg== ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:references:in-reply-to:message-id :date:subject:cc:to:from:dkim-signature; bh=4I6fU7/4wPT+PYoUIdLLREVRUQVIkIPSriBzY+bMogw=; 
h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=rGzN3nq+633hzQjjN0fzRS7XMnqOXOjPVTkDwTV5KEbRltZpaA29OzCCY/rXulpHnZfzUV6DzF9aHH16esu3yADvxX+YbgX/8/p3b9fGQndUUIUFajQ+h7PGUfB1rvD/LAeQwHB2bWpA79UkgMYouuehBvZGLx5bRaviPclaC5I= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=WqlEJemH; arc=none smtp.client-ip=170.10.133.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1707173869; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=4I6fU7/4wPT+PYoUIdLLREVRUQVIkIPSriBzY+bMogw=; b=WqlEJemHiyXJR1TD4ovu9KdHnEwR7rRmUFYXw4z187hQUkzqySzwBh+AFNr6c9/+Egv4QP rEX5SemFWeUz8vvJEnhReAlnvlJH99UUrUAUHxmOokPIm+CtPEyJD3BCeKl91+zm0Zbft6 xMGz3lQj6L2wEHTOuE7Gj/vnZMFMhrY= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-70-OygGiR3FO4eTM_n7IadjIQ-1; Mon, 05 Feb 2024 17:57:43 -0500 X-MC-Unique: OygGiR3FO4eTM_n7IadjIQ-1 Received: from smtp.corp.redhat.com (int-mx10.intmail.prod.int.rdu2.redhat.com [10.11.54.10]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 1E7F08870E0; Mon, 5 Feb 2024 22:57:42 +0000 (UTC) Received: from warthog.procyon.org.com (unknown [10.42.28.245]) by smtp.corp.redhat.com (Postfix) with ESMTP id F176C492BF0; Mon, 5 Feb 2024 22:57:39 +0000 (UTC) From: David Howells To: Steve French Cc: David Howells , Jeff Layton , Matthew Wilcox , Paulo Alcantara , Shyam Prasad N , Tom Talpey , Christian Brauner , netfs@lists.linux.dev, linux-cifs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Steve French , Shyam Prasad N , Rohith Surabattula Subject: [PATCH v5 04/12] cifs: Use more fields from netfs_io_subrequest Date: Mon, 5 Feb 2024 22:57:16 +0000 Message-ID: <20240205225726.3104808-5-dhowells@redhat.com> In-Reply-To: <20240205225726.3104808-1-dhowells@redhat.com> References: <20240205225726.3104808-1-dhowells@redhat.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.11.54.10 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1790101752176705865 X-GMAIL-MSGID: 1790101752176705865 Use more fields from netfs_io_subrequest instead of those incorporated into cifs_io_subrequest from cifs_readdata and cifs_writedata. 
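As a condensed illustration of the pattern applied throughout the hunks below (not an additional change): accesses to the duplicated offset/length fields move over to the fields netfslib already maintains in the embedded subrequest, with the length now being a size_t:

	/* Before: duplicated fields kept on cifs_io_subrequest */
	cifs_dbg(FYI, "%s: offset=%llu bytes=%u\n",
		 __func__, rdata->offset, rdata->bytes);

	/* After: use the embedded netfs_io_subrequest fields instead */
	cifs_dbg(FYI, "%s: offset=%llu bytes=%zu\n",
		 __func__, rdata->subreq.start, rdata->subreq.len);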
Signed-off-by: David Howells cc: Steve French cc: Shyam Prasad N cc: Rohith Surabattula cc: Jeff Layton cc: linux-cifs@vger.kernel.org cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org cc: linux-mm@kvack.org --- fs/smb/client/cifsglob.h | 3 - fs/smb/client/cifssmb.c | 52 +++++++++--------- fs/smb/client/file.c | 112 +++++++++++++++++++------------------- fs/smb/client/smb2ops.c | 4 +- fs/smb/client/smb2pdu.c | 52 +++++++++--------- fs/smb/client/transport.c | 6 +- 6 files changed, 113 insertions(+), 116 deletions(-) diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h index 7f7073813612..cac10f8e17e4 100644 --- a/fs/smb/client/cifsglob.h +++ b/fs/smb/client/cifsglob.h @@ -1487,9 +1487,6 @@ struct cifs_io_subrequest { struct list_head list; struct completion done; struct work_struct work; - struct iov_iter iter; - __u64 offset; - unsigned int bytes; }; /* diff --git a/fs/smb/client/cifssmb.c b/fs/smb/client/cifssmb.c index 1ead867a0f4a..b1c33a624157 100644 --- a/fs/smb/client/cifssmb.c +++ b/fs/smb/client/cifssmb.c @@ -1269,12 +1269,12 @@ cifs_readv_callback(struct mid_q_entry *mid) struct TCP_Server_Info *server = tcon->ses->server; struct smb_rqst rqst = { .rq_iov = rdata->iov, .rq_nvec = 2, - .rq_iter = rdata->iter }; + .rq_iter = rdata->subreq.io_iter }; struct cifs_credits credits = { .value = 1, .instance = 0 }; - cifs_dbg(FYI, "%s: mid=%llu state=%d result=%d bytes=%u\n", + cifs_dbg(FYI, "%s: mid=%llu state=%d result=%d bytes=%zu\n", __func__, mid->mid, mid->mid_state, rdata->result, - rdata->bytes); + rdata->subreq.len); switch (mid->mid_state) { case MID_RESPONSE_RECEIVED: @@ -1322,14 +1322,14 @@ cifs_async_readv(struct cifs_io_subrequest *rdata) struct smb_rqst rqst = { .rq_iov = rdata->iov, .rq_nvec = 2 }; - cifs_dbg(FYI, "%s: offset=%llu bytes=%u\n", - __func__, rdata->offset, rdata->bytes); + cifs_dbg(FYI, "%s: offset=%llu bytes=%zu\n", + __func__, rdata->subreq.start, rdata->subreq.len); if (tcon->ses->capabilities & CAP_LARGE_FILES) wct = 12; else { wct = 10; /* old style read */ - if ((rdata->offset >> 32) > 0) { + if ((rdata->subreq.start >> 32) > 0) { /* can not handle this big offset for old */ return -EIO; } @@ -1344,12 +1344,12 @@ cifs_async_readv(struct cifs_io_subrequest *rdata) smb->AndXCommand = 0xFF; /* none */ smb->Fid = rdata->cfile->fid.netfid; - smb->OffsetLow = cpu_to_le32(rdata->offset & 0xFFFFFFFF); + smb->OffsetLow = cpu_to_le32(rdata->subreq.start & 0xFFFFFFFF); if (wct == 12) - smb->OffsetHigh = cpu_to_le32(rdata->offset >> 32); + smb->OffsetHigh = cpu_to_le32(rdata->subreq.start >> 32); smb->Remaining = 0; - smb->MaxCount = cpu_to_le16(rdata->bytes & 0xFFFF); - smb->MaxCountHigh = cpu_to_le32(rdata->bytes >> 16); + smb->MaxCount = cpu_to_le16(rdata->subreq.len & 0xFFFF); + smb->MaxCountHigh = cpu_to_le32(rdata->subreq.len >> 16); if (wct == 12) smb->ByteCount = 0; else { @@ -1633,13 +1633,13 @@ cifs_writev_callback(struct mid_q_entry *mid) * client. OS/2 servers are known to set incorrect * CountHigh values. 
*/ - if (written > wdata->bytes) + if (written > wdata->subreq.len) written &= 0xFFFF; - if (written < wdata->bytes) + if (written < wdata->subreq.len) wdata->result = -ENOSPC; else - wdata->bytes = written; + wdata->subreq.len = written; break; case MID_REQUEST_SUBMITTED: case MID_RETRY_NEEDED: @@ -1670,7 +1670,7 @@ cifs_async_writev(struct cifs_io_subrequest *wdata) wct = 14; } else { wct = 12; - if (wdata->offset >> 32 > 0) { + if (wdata->subreq.start >> 32 > 0) { /* can not handle big offset for old srv */ return -EIO; } @@ -1685,9 +1685,9 @@ cifs_async_writev(struct cifs_io_subrequest *wdata) smb->AndXCommand = 0xFF; /* none */ smb->Fid = wdata->cfile->fid.netfid; - smb->OffsetLow = cpu_to_le32(wdata->offset & 0xFFFFFFFF); + smb->OffsetLow = cpu_to_le32(wdata->subreq.start & 0xFFFFFFFF); if (wct == 14) - smb->OffsetHigh = cpu_to_le32(wdata->offset >> 32); + smb->OffsetHigh = cpu_to_le32(wdata->subreq.start >> 32); smb->Reserved = 0xFFFFFFFF; smb->WriteMode = 0; smb->Remaining = 0; @@ -1703,24 +1703,24 @@ cifs_async_writev(struct cifs_io_subrequest *wdata) rqst.rq_iov = iov; rqst.rq_nvec = 2; - rqst.rq_iter = wdata->iter; - rqst.rq_iter_size = iov_iter_count(&wdata->iter); + rqst.rq_iter = wdata->subreq.io_iter; + rqst.rq_iter_size = iov_iter_count(&wdata->subreq.io_iter); - cifs_dbg(FYI, "async write at %llu %u bytes\n", - wdata->offset, wdata->bytes); + cifs_dbg(FYI, "async write at %llu %zu bytes\n", + wdata->subreq.start, wdata->subreq.len); - smb->DataLengthLow = cpu_to_le16(wdata->bytes & 0xFFFF); - smb->DataLengthHigh = cpu_to_le16(wdata->bytes >> 16); + smb->DataLengthLow = cpu_to_le16(wdata->subreq.len & 0xFFFF); + smb->DataLengthHigh = cpu_to_le16(wdata->subreq.len >> 16); if (wct == 14) { - inc_rfc1001_len(&smb->hdr, wdata->bytes + 1); - put_bcc(wdata->bytes + 1, &smb->hdr); + inc_rfc1001_len(&smb->hdr, wdata->subreq.len + 1); + put_bcc(wdata->subreq.len + 1, &smb->hdr); } else { /* wct == 12 */ struct smb_com_writex_req *smbw = (struct smb_com_writex_req *)smb; - inc_rfc1001_len(&smbw->hdr, wdata->bytes + 5); - put_bcc(wdata->bytes + 5, &smbw->hdr); + inc_rfc1001_len(&smbw->hdr, wdata->subreq.len + 5); + put_bcc(wdata->subreq.len + 5, &smbw->hdr); iov[1].iov_len += 4; /* pad bigger by four bytes */ } diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c index 6c7e6a513b90..6e53657154d6 100644 --- a/fs/smb/client/file.c +++ b/fs/smb/client/file.c @@ -2440,8 +2440,8 @@ cifs_writev_requeue(struct cifs_io_subrequest *wdata) int rc = 0; struct inode *inode = d_inode(wdata->cfile->dentry); struct TCP_Server_Info *server; - unsigned int rest_len = wdata->bytes; - loff_t fpos = wdata->offset; + unsigned int rest_len = wdata->subreq.len; + loff_t fpos = wdata->subreq.start; server = tlink_tcon(wdata->cfile->tlink)->ses->server; do { @@ -2466,14 +2466,14 @@ cifs_writev_requeue(struct cifs_io_subrequest *wdata) } wdata2->sync_mode = wdata->sync_mode; - wdata2->offset = fpos; - wdata2->bytes = cur_len; - wdata2->iter = wdata->iter; + wdata2->subreq.start = fpos; + wdata2->subreq.len = cur_len; + wdata2->subreq.io_iter = wdata->subreq.io_iter; - iov_iter_advance(&wdata2->iter, fpos - wdata->offset); - iov_iter_truncate(&wdata2->iter, wdata2->bytes); + iov_iter_advance(&wdata2->subreq.io_iter, fpos - wdata->subreq.start); + iov_iter_truncate(&wdata2->subreq.io_iter, wdata2->subreq.len); - if (iov_iter_is_xarray(&wdata2->iter)) + if (iov_iter_is_xarray(&wdata2->subreq.io_iter)) /* Check for pages having been redirtied and clean * them. We can do this by walking the xarray. 
If * it's not an xarray, then it's a DIO and we shouldn't @@ -2507,7 +2507,7 @@ cifs_writev_requeue(struct cifs_io_subrequest *wdata) } while (rest_len > 0); /* Clean up remaining pages from the original wdata */ - if (iov_iter_is_xarray(&wdata->iter)) + if (iov_iter_is_xarray(&wdata->subreq.io_iter)) cifs_pages_write_failed(inode, fpos, rest_len); if (rc != 0 && !is_retryable_error(rc)) @@ -2524,19 +2524,19 @@ cifs_writev_complete(struct work_struct *work) if (wdata->result == 0) { spin_lock(&inode->i_lock); - cifs_update_eof(CIFS_I(inode), wdata->offset, wdata->bytes); + cifs_update_eof(CIFS_I(inode), wdata->subreq.start, wdata->subreq.len); spin_unlock(&inode->i_lock); cifs_stats_bytes_written(tlink_tcon(wdata->cfile->tlink), - wdata->bytes); + wdata->subreq.len); } else if (wdata->sync_mode == WB_SYNC_ALL && wdata->result == -EAGAIN) return cifs_writev_requeue(wdata); if (wdata->result == -EAGAIN) - cifs_pages_write_redirty(inode, wdata->offset, wdata->bytes); + cifs_pages_write_redirty(inode, wdata->subreq.start, wdata->subreq.len); else if (wdata->result < 0) - cifs_pages_write_failed(inode, wdata->offset, wdata->bytes); + cifs_pages_write_failed(inode, wdata->subreq.start, wdata->subreq.len); else - cifs_pages_written_back(inode, wdata->offset, wdata->bytes); + cifs_pages_written_back(inode, wdata->subreq.start, wdata->subreq.len); if (wdata->result != -EAGAIN) mapping_set_error(inode->i_mapping, wdata->result); @@ -2768,7 +2768,7 @@ static ssize_t cifs_write_back_from_locked_folio(struct address_space *mapping, } wdata->sync_mode = wbc->sync_mode; - wdata->offset = folio_pos(folio); + wdata->subreq.start = folio_pos(folio); wdata->pid = cfile->pid; wdata->credits = credits_on_stack; wdata->cfile = cfile; @@ -2803,7 +2803,7 @@ static ssize_t cifs_write_back_from_locked_folio(struct address_space *mapping, len = min_t(loff_t, len, max_len); } - wdata->bytes = len; + wdata->subreq.len = len; /* We now have a contiguous set of dirty pages, each with writeback * set; the first page is still locked at this point, but all the rest @@ -2812,10 +2812,10 @@ static ssize_t cifs_write_back_from_locked_folio(struct address_space *mapping, folio_unlock(folio); if (start < i_size) { - iov_iter_xarray(&wdata->iter, ITER_SOURCE, &mapping->i_pages, + iov_iter_xarray(&wdata->subreq.io_iter, ITER_SOURCE, &mapping->i_pages, start, len); - rc = adjust_credits(wdata->server, &wdata->credits, wdata->bytes); + rc = adjust_credits(wdata->server, &wdata->credits, wdata->subreq.len); if (rc) goto err_wdata; @@ -3234,7 +3234,7 @@ cifs_uncached_writev_complete(struct work_struct *work) struct cifsInodeInfo *cifsi = CIFS_I(inode); spin_lock(&inode->i_lock); - cifs_update_eof(cifsi, wdata->offset, wdata->bytes); + cifs_update_eof(cifsi, wdata->subreq.start, wdata->subreq.len); if (cifsi->netfs.remote_i_size > inode->i_size) i_size_write(inode, cifsi->netfs.remote_i_size); spin_unlock(&inode->i_lock); @@ -3270,19 +3270,19 @@ cifs_resend_wdata(struct cifs_io_subrequest *wdata, struct list_head *wdata_list * segments */ do { - rc = server->ops->wait_mtu_credits(server, wdata->bytes, + rc = server->ops->wait_mtu_credits(server, wdata->subreq.len, &wsize, &credits); if (rc) goto fail; - if (wsize < wdata->bytes) { + if (wsize < wdata->subreq.len) { add_credits_and_wake_if(server, &credits, 0); msleep(1000); } - } while (wsize < wdata->bytes); + } while (wsize < wdata->subreq.len); wdata->credits = credits; - rc = adjust_credits(server, &wdata->credits, wdata->bytes); + rc = adjust_credits(server, &wdata->credits, 
wdata->subreq.len); if (!rc) { if (wdata->cfile->invalidHandle) @@ -3429,19 +3429,19 @@ cifs_write_from_iter(loff_t fpos, size_t len, struct iov_iter *from, wdata->uncached = true; wdata->sync_mode = WB_SYNC_ALL; - wdata->offset = (__u64)fpos; + wdata->subreq.start = (__u64)fpos; wdata->cfile = cifsFileInfo_get(open_file); wdata->server = server; wdata->pid = pid; - wdata->bytes = cur_len; + wdata->subreq.len = cur_len; wdata->credits = credits_on_stack; - wdata->iter = *from; + wdata->subreq.io_iter = *from; wdata->ctx = ctx; kref_get(&ctx->refcount); - iov_iter_truncate(&wdata->iter, cur_len); + iov_iter_truncate(&wdata->subreq.io_iter, cur_len); - rc = adjust_credits(server, &wdata->credits, wdata->bytes); + rc = adjust_credits(server, &wdata->credits, wdata->subreq.len); if (!rc) { if (wdata->cfile->invalidHandle) @@ -3503,7 +3503,7 @@ static void collect_uncached_write_data(struct cifs_aio_ctx *ctx) if (wdata->result) rc = wdata->result; else - ctx->total_len += wdata->bytes; + ctx->total_len += wdata->subreq.len; /* resend call if it's a retryable error */ if (rc == -EAGAIN) { @@ -3518,10 +3518,10 @@ static void collect_uncached_write_data(struct cifs_aio_ctx *ctx) wdata, &tmp_list, ctx); else { iov_iter_advance(&tmp_from, - wdata->offset - ctx->pos); + wdata->subreq.start - ctx->pos); - rc = cifs_write_from_iter(wdata->offset, - wdata->bytes, &tmp_from, + rc = cifs_write_from_iter(wdata->subreq.start, + wdata->subreq.len, &tmp_from, ctx->cfile, cifs_sb, &tmp_list, ctx); @@ -3844,20 +3844,20 @@ static int cifs_resend_rdata(struct cifs_io_subrequest *rdata, * segments */ do { - rc = server->ops->wait_mtu_credits(server, rdata->bytes, + rc = server->ops->wait_mtu_credits(server, rdata->subreq.len, &rsize, &credits); if (rc) goto fail; - if (rsize < rdata->bytes) { + if (rsize < rdata->subreq.len) { add_credits_and_wake_if(server, &credits, 0); msleep(1000); } - } while (rsize < rdata->bytes); + } while (rsize < rdata->subreq.len); rdata->credits = credits; - rc = adjust_credits(server, &rdata->credits, rdata->bytes); + rc = adjust_credits(server, &rdata->credits, rdata->subreq.len); if (!rc) { if (rdata->cfile->invalidHandle) rc = -EAGAIN; @@ -3955,17 +3955,17 @@ cifs_send_async_read(loff_t fpos, size_t len, struct cifsFileInfo *open_file, rdata->server = server; rdata->cfile = cifsFileInfo_get(open_file); - rdata->offset = fpos; - rdata->bytes = cur_len; + rdata->subreq.start = fpos; + rdata->subreq.len = cur_len; rdata->pid = pid; rdata->credits = credits_on_stack; rdata->ctx = ctx; kref_get(&ctx->refcount); - rdata->iter = ctx->iter; - iov_iter_truncate(&rdata->iter, cur_len); + rdata->subreq.io_iter = ctx->iter; + iov_iter_truncate(&rdata->subreq.io_iter, cur_len); - rc = adjust_credits(server, &rdata->credits, rdata->bytes); + rc = adjust_credits(server, &rdata->credits, rdata->subreq.len); if (!rc) { if (rdata->cfile->invalidHandle) @@ -4035,8 +4035,8 @@ collect_uncached_read_data(struct cifs_aio_ctx *ctx) &tmp_list, ctx); } else { rc = cifs_send_async_read( - rdata->offset + got_bytes, - rdata->bytes - got_bytes, + rdata->subreq.start + got_bytes, + rdata->subreq.len - got_bytes, rdata->cfile, cifs_sb, &tmp_list, ctx); @@ -4050,7 +4050,7 @@ collect_uncached_read_data(struct cifs_aio_ctx *ctx) rc = rdata->result; /* if there was a short read -- discard anything left */ - if (rdata->got_bytes && rdata->got_bytes < rdata->bytes) + if (rdata->got_bytes && rdata->got_bytes < rdata->subreq.len) rc = -ENODATA; ctx->total_len += rdata->got_bytes; @@ -4434,16 +4434,16 @@ static void 
cifs_readahead_complete(struct work_struct *work) pgoff_t last; bool good = rdata->result == 0 || (rdata->result == -EAGAIN && rdata->got_bytes); - XA_STATE(xas, &rdata->mapping->i_pages, rdata->offset / PAGE_SIZE); + XA_STATE(xas, &rdata->mapping->i_pages, rdata->subreq.start / PAGE_SIZE); if (good) cifs_readahead_to_fscache(rdata->mapping->host, - rdata->offset, rdata->bytes); + rdata->subreq.start, rdata->subreq.len); - if (iov_iter_count(&rdata->iter) > 0) - iov_iter_zero(iov_iter_count(&rdata->iter), &rdata->iter); + if (iov_iter_count(&rdata->subreq.io_iter) > 0) + iov_iter_zero(iov_iter_count(&rdata->subreq.io_iter), &rdata->subreq.io_iter); - last = (rdata->offset + rdata->bytes - 1) / PAGE_SIZE; + last = (rdata->subreq.start + rdata->subreq.len - 1) / PAGE_SIZE; rcu_read_lock(); xas_for_each(&xas, folio, last) { @@ -4582,8 +4582,8 @@ static void cifs_readahead(struct readahead_control *ractl) break; } - rdata->offset = ra_index * PAGE_SIZE; - rdata->bytes = nr_pages * PAGE_SIZE; + rdata->subreq.start = ra_index * PAGE_SIZE; + rdata->subreq.len = nr_pages * PAGE_SIZE; rdata->cfile = cifsFileInfo_get(open_file); rdata->server = server; rdata->mapping = ractl->mapping; @@ -4597,10 +4597,10 @@ static void cifs_readahead(struct readahead_control *ractl) ra_pages -= nr_pages; ra_index += nr_pages; - iov_iter_xarray(&rdata->iter, ITER_DEST, &rdata->mapping->i_pages, - rdata->offset, rdata->bytes); + iov_iter_xarray(&rdata->subreq.io_iter, ITER_DEST, &rdata->mapping->i_pages, + rdata->subreq.start, rdata->subreq.len); - rc = adjust_credits(server, &rdata->credits, rdata->bytes); + rc = adjust_credits(server, &rdata->credits, rdata->subreq.len); if (!rc) { if (rdata->cfile->invalidHandle) rc = -EAGAIN; @@ -4611,8 +4611,8 @@ static void cifs_readahead(struct readahead_control *ractl) if (rc) { add_credits_and_wake_if(server, &rdata->credits, 0); cifs_unlock_folios(rdata->mapping, - rdata->offset / PAGE_SIZE, - (rdata->offset + rdata->bytes - 1) / PAGE_SIZE); + rdata->subreq.start / PAGE_SIZE, + (rdata->subreq.start + rdata->subreq.len - 1) / PAGE_SIZE); /* Fallback to the readpage in error/reconnect cases */ cifs_put_readdata(rdata); break; diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c index 9da9ee98ae39..8d674aef8dd9 100644 --- a/fs/smb/client/smb2ops.c +++ b/fs/smb/client/smb2ops.c @@ -4580,7 +4580,7 @@ handle_read_data(struct TCP_Server_Info *server, struct mid_q_entry *mid, /* Copy the data to the output I/O iterator. 
*/ rdata->result = cifs_copy_pages_to_iter(pages, pages_len, - cur_off, &rdata->iter); + cur_off, &rdata->subreq.io_iter); if (rdata->result != 0) { if (is_offloaded) mid->mid_state = MID_RESPONSE_MALFORMED; @@ -4594,7 +4594,7 @@ handle_read_data(struct TCP_Server_Info *server, struct mid_q_entry *mid, /* read response payload is in buf */ WARN_ONCE(pages && !xa_empty(pages), "read data can be either in buf or in pages"); - length = copy_to_iter(buf + data_offset, data_len, &rdata->iter); + length = copy_to_iter(buf + data_offset, data_len, &rdata->subreq.io_iter); if (length < 0) return length; rdata->got_bytes = data_len; diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c index db739806343d..2ecc5f210329 100644 --- a/fs/smb/client/smb2pdu.c +++ b/fs/smb/client/smb2pdu.c @@ -4399,7 +4399,7 @@ smb2_new_read_req(void **buf, unsigned int *total_len, struct smbd_buffer_descriptor_v1 *v1; bool need_invalidate = server->dialect == SMB30_PROT_ID; - rdata->mr = smbd_register_mr(server->smbd_conn, &rdata->iter, + rdata->mr = smbd_register_mr(server->smbd_conn, &rdata->subreq.io_iter, true, need_invalidate); if (!rdata->mr) return -EAGAIN; @@ -4459,17 +4459,17 @@ smb2_readv_callback(struct mid_q_entry *mid) struct smb_rqst rqst = { .rq_iov = &rdata->iov[1], .rq_nvec = 1 }; if (rdata->got_bytes) { - rqst.rq_iter = rdata->iter; - rqst.rq_iter_size = iov_iter_count(&rdata->iter); + rqst.rq_iter = rdata->subreq.io_iter; + rqst.rq_iter_size = iov_iter_count(&rdata->subreq.io_iter); } WARN_ONCE(rdata->server != mid->server, "rdata server %p != mid server %p", rdata->server, mid->server); - cifs_dbg(FYI, "%s: mid=%llu state=%d result=%d bytes=%u\n", + cifs_dbg(FYI, "%s: mid=%llu state=%d result=%d bytes=%zu\n", __func__, mid->mid, mid->mid_state, rdata->result, - rdata->bytes); + rdata->subreq.len); switch (mid->mid_state) { case MID_RESPONSE_RECEIVED: @@ -4522,13 +4522,13 @@ smb2_readv_callback(struct mid_q_entry *mid) cifs_stats_fail_inc(tcon, SMB2_READ_HE); trace_smb3_read_err(0 /* xid */, rdata->cfile->fid.persistent_fid, - tcon->tid, tcon->ses->Suid, rdata->offset, - rdata->bytes, rdata->result); + tcon->tid, tcon->ses->Suid, rdata->subreq.start, + rdata->subreq.len, rdata->result); } else trace_smb3_read_done(0 /* xid */, rdata->cfile->fid.persistent_fid, tcon->tid, tcon->ses->Suid, - rdata->offset, rdata->got_bytes); + rdata->subreq.start, rdata->got_bytes); queue_work(cifsiod_wq, &rdata->work); release_mid(mid); @@ -4550,16 +4550,16 @@ smb2_async_readv(struct cifs_io_subrequest *rdata) unsigned int total_len; int credit_request; - cifs_dbg(FYI, "%s: offset=%llu bytes=%u\n", - __func__, rdata->offset, rdata->bytes); + cifs_dbg(FYI, "%s: offset=%llu bytes=%zu\n", + __func__, rdata->subreq.start, rdata->subreq.len); if (!rdata->server) rdata->server = cifs_pick_channel(tcon->ses); io_parms.tcon = tlink_tcon(rdata->cfile->tlink); io_parms.server = server = rdata->server; - io_parms.offset = rdata->offset; - io_parms.length = rdata->bytes; + io_parms.offset = rdata->subreq.start; + io_parms.length = rdata->subreq.len; io_parms.persistent_fid = rdata->cfile->fid.persistent_fid; io_parms.volatile_fid = rdata->cfile->fid.volatile_fid; io_parms.pid = rdata->pid; @@ -4578,7 +4578,7 @@ smb2_async_readv(struct cifs_io_subrequest *rdata) shdr = (struct smb2_hdr *)buf; if (rdata->credits.value > 0) { - shdr->CreditCharge = cpu_to_le16(DIV_ROUND_UP(rdata->bytes, + shdr->CreditCharge = cpu_to_le16(DIV_ROUND_UP(rdata->subreq.len, SMB2_MAX_BUFFER_SIZE)); credit_request = le16_to_cpu(shdr->CreditCharge) 
+ 8; if (server->credits >= server->max_credits) @@ -4588,7 +4588,7 @@ smb2_async_readv(struct cifs_io_subrequest *rdata) min_t(int, server->max_credits - server->credits, credit_request)); - rc = adjust_credits(server, &rdata->credits, rdata->bytes); + rc = adjust_credits(server, &rdata->credits, rdata->subreq.len); if (rc) goto async_readv_out; @@ -4728,13 +4728,13 @@ smb2_writev_callback(struct mid_q_entry *mid) * client. OS/2 servers are known to set incorrect * CountHigh values. */ - if (written > wdata->bytes) + if (written > wdata->subreq.len) written &= 0xFFFF; - if (written < wdata->bytes) + if (written < wdata->subreq.len) wdata->result = -ENOSPC; else - wdata->bytes = written; + wdata->subreq.len = written; break; case MID_REQUEST_SUBMITTED: case MID_RETRY_NEEDED: @@ -4765,8 +4765,8 @@ smb2_writev_callback(struct mid_q_entry *mid) cifs_stats_fail_inc(tcon, SMB2_WRITE_HE); trace_smb3_write_err(0 /* no xid */, wdata->cfile->fid.persistent_fid, - tcon->tid, tcon->ses->Suid, wdata->offset, - wdata->bytes, wdata->result); + tcon->tid, tcon->ses->Suid, wdata->subreq.start, + wdata->subreq.len, wdata->result); if (wdata->result == -ENOSPC) pr_warn_once("Out of space writing to %s\n", tcon->tree_name); @@ -4774,7 +4774,7 @@ smb2_writev_callback(struct mid_q_entry *mid) trace_smb3_write_done(0 /* no xid */, wdata->cfile->fid.persistent_fid, tcon->tid, tcon->ses->Suid, - wdata->offset, wdata->bytes); + wdata->subreq.start, wdata->subreq.len); queue_work(cifsiod_wq, &wdata->work); release_mid(mid); @@ -4807,8 +4807,8 @@ smb2_async_writev(struct cifs_io_subrequest *wdata) _io_parms = (struct cifs_io_parms) { .tcon = tcon, .server = server, - .offset = wdata->offset, - .length = wdata->bytes, + .offset = wdata->subreq.start, + .length = wdata->subreq.len, .persistent_fid = wdata->cfile->fid.persistent_fid, .volatile_fid = wdata->cfile->fid.volatile_fid, .pid = wdata->pid, @@ -4850,10 +4850,10 @@ smb2_async_writev(struct cifs_io_subrequest *wdata) */ if (smb3_use_rdma_offload(io_parms)) { struct smbd_buffer_descriptor_v1 *v1; - size_t data_size = iov_iter_count(&wdata->iter); + size_t data_size = iov_iter_count(&wdata->subreq.io_iter); bool need_invalidate = server->dialect == SMB30_PROT_ID; - wdata->mr = smbd_register_mr(server->smbd_conn, &wdata->iter, + wdata->mr = smbd_register_mr(server->smbd_conn, &wdata->subreq.io_iter, false, need_invalidate); if (!wdata->mr) { rc = -EAGAIN; @@ -4880,7 +4880,7 @@ smb2_async_writev(struct cifs_io_subrequest *wdata) rqst.rq_iov = iov; rqst.rq_nvec = 1; - rqst.rq_iter = wdata->iter; + rqst.rq_iter = wdata->subreq.io_iter; rqst.rq_iter_size = iov_iter_count(&rqst.rq_iter); if (wdata->replay) smb2_set_replay(server, &rqst); @@ -4900,7 +4900,7 @@ smb2_async_writev(struct cifs_io_subrequest *wdata) #endif if (wdata->credits.value > 0) { - shdr->CreditCharge = cpu_to_le16(DIV_ROUND_UP(wdata->bytes, + shdr->CreditCharge = cpu_to_le16(DIV_ROUND_UP(wdata->subreq.len, SMB2_MAX_BUFFER_SIZE)); credit_request = le16_to_cpu(shdr->CreditCharge) + 8; if (server->credits >= server->max_credits) diff --git a/fs/smb/client/transport.c b/fs/smb/client/transport.c index a5bab478e6de..e52967de59e6 100644 --- a/fs/smb/client/transport.c +++ b/fs/smb/client/transport.c @@ -1702,8 +1702,8 @@ cifs_readv_receive(struct TCP_Server_Info *server, struct mid_q_entry *mid) unsigned int buflen = server->pdu_size + HEADER_PREAMBLE_SIZE(server); bool use_rdma_mr = false; - cifs_dbg(FYI, "%s: mid=%llu offset=%llu bytes=%u\n", - __func__, mid->mid, rdata->offset, rdata->bytes); + 
cifs_dbg(FYI, "%s: mid=%llu offset=%llu bytes=%zu\n", + __func__, mid->mid, rdata->subreq.start, rdata->subreq.len); /* * read the rest of READ_RSP header (sans Data array), or whatever we @@ -1808,7 +1808,7 @@ cifs_readv_receive(struct TCP_Server_Info *server, struct mid_q_entry *mid) length = data_len; /* An RDMA read is already done. */ else #endif - length = cifs_read_iter_from_socket(server, &rdata->iter, + length = cifs_read_iter_from_socket(server, &rdata->subreq.io_iter, data_len); if (length > 0) rdata->got_bytes += length;
From patchwork Mon Feb 5 22:57:17 2024
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 197092
From: David Howells
To: Steve French
Cc: David Howells, Jeff Layton, Matthew Wilcox, Paulo Alcantara, Shyam Prasad N, Tom Talpey, Christian Brauner, netfs@lists.linux.dev, linux-cifs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Steve French, Shyam Prasad N, Rohith Surabattula, linux-cachefs@redhat.com
Subject: [PATCH v5 05/12] cifs: Make wait_mtu_credits take size_t args
Date: Mon, 5 Feb 2024 22:57:17 +0000
Message-ID: <20240205225726.3104808-6-dhowells@redhat.com>
In-Reply-To: <20240205225726.3104808-1-dhowells@redhat.com>
References: <20240205225726.3104808-1-dhowells@redhat.com>

Make the wait_mtu_credits functions use size_t for the size and num arguments rather than unsigned int as netfslib uses size_t/ssize_t for arguments and return values to allow for extra capacity.
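In sketch form (condensed from the cifsglob.h hunk below, not an additional change), the ops-table hook and both of its implementations switch from unsigned int to size_t so that lengths passed through from netfslib are not narrowed:

	/* Before */
	int (*wait_mtu_credits)(struct TCP_Server_Info *, unsigned int,
				unsigned int *, struct cifs_credits *);

	/* After */
	int (*wait_mtu_credits)(struct TCP_Server_Info *, size_t,
				size_t *, struct cifs_credits *);

Callers such as cifs_resend_wdata() correspondingly declare their wsize/rsize locals as size_t.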
Signed-off-by: David Howells cc: Steve French cc: Shyam Prasad N cc: Rohith Surabattula cc: Jeff Layton cc: linux-cifs@vger.kernel.org cc: linux-cachefs@redhat.com cc: netfs@lists.linux.dev cc: linux-mm@kvack.org --- fs/smb/client/cifsglob.h | 4 ++-- fs/smb/client/cifsproto.h | 2 +- fs/smb/client/file.c | 18 ++++++++++-------- fs/smb/client/smb2ops.c | 4 ++-- fs/smb/client/transport.c | 4 ++-- 5 files changed, 17 insertions(+), 15 deletions(-) diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h index cac10f8e17e4..c0a0de24f52a 100644 --- a/fs/smb/client/cifsglob.h +++ b/fs/smb/client/cifsglob.h @@ -530,8 +530,8 @@ struct smb_version_operations { /* writepages retry size */ unsigned int (*wp_retry_size)(struct inode *); /* get mtu credits */ - int (*wait_mtu_credits)(struct TCP_Server_Info *, unsigned int, - unsigned int *, struct cifs_credits *); + int (*wait_mtu_credits)(struct TCP_Server_Info *, size_t, + size_t *, struct cifs_credits *); /* adjust previously taken mtu credits to request size */ int (*adjust_credits)(struct TCP_Server_Info *server, struct cifs_credits *credits, diff --git a/fs/smb/client/cifsproto.h b/fs/smb/client/cifsproto.h index 867077be859c..00cb0d2dc935 100644 --- a/fs/smb/client/cifsproto.h +++ b/fs/smb/client/cifsproto.h @@ -121,7 +121,7 @@ extern struct mid_q_entry *cifs_setup_async_request(struct TCP_Server_Info *, extern int cifs_check_receive(struct mid_q_entry *mid, struct TCP_Server_Info *server, bool log_error); extern int cifs_wait_mtu_credits(struct TCP_Server_Info *server, - unsigned int size, unsigned int *num, + size_t size, size_t *num, struct cifs_credits *credits); extern int SendReceive2(const unsigned int /* xid */ , struct cifs_ses *, struct kvec *, int /* nvec to send */, diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c index 6e53657154d6..63237dad0c4d 100644 --- a/fs/smb/client/file.c +++ b/fs/smb/client/file.c @@ -2735,9 +2735,9 @@ static ssize_t cifs_write_back_from_locked_folio(struct address_space *mapping, struct cifs_credits credits_on_stack; struct cifs_credits *credits = &credits_on_stack; struct cifsFileInfo *cfile = NULL; - unsigned int xid, wsize, len; + unsigned int xid, len; loff_t i_size = i_size_read(inode); - size_t max_len; + size_t max_len, wsize; long count = wbc->nr_to_write; int rc; @@ -3249,7 +3249,7 @@ static int cifs_resend_wdata(struct cifs_io_subrequest *wdata, struct list_head *wdata_list, struct cifs_aio_ctx *ctx) { - unsigned int wsize; + size_t wsize; struct cifs_credits credits; int rc; struct TCP_Server_Info *server = wdata->server; @@ -3384,7 +3384,8 @@ cifs_write_from_iter(loff_t fpos, size_t len, struct iov_iter *from, do { struct cifs_credits credits_on_stack; struct cifs_credits *credits = &credits_on_stack; - unsigned int wsize, nsegs = 0; + unsigned int nsegs = 0; + size_t wsize; if (signal_pending(current)) { rc = -EINTR; @@ -3821,7 +3822,7 @@ static int cifs_resend_rdata(struct cifs_io_subrequest *rdata, struct list_head *rdata_list, struct cifs_aio_ctx *ctx) { - unsigned int rsize; + size_t rsize; struct cifs_credits credits; int rc; struct TCP_Server_Info *server; @@ -3895,10 +3896,10 @@ cifs_send_async_read(loff_t fpos, size_t len, struct cifsFileInfo *open_file, struct cifs_aio_ctx *ctx) { struct cifs_io_subrequest *rdata; - unsigned int rsize, nsegs, max_segs = INT_MAX; + unsigned int nsegs, max_segs = INT_MAX; struct cifs_credits credits_on_stack; struct cifs_credits *credits = &credits_on_stack; - size_t cur_len, max_len; + size_t cur_len, max_len, rsize; int rc; pid_t pid; 
struct TCP_Server_Info *server; @@ -4494,12 +4495,13 @@ static void cifs_readahead(struct readahead_control *ractl) * Chop the readahead request up into rsize-sized read requests. */ while ((nr_pages = ra_pages)) { - unsigned int i, rsize; + unsigned int i; struct cifs_io_subrequest *rdata; struct cifs_credits credits_on_stack; struct cifs_credits *credits = &credits_on_stack; struct folio *folio; pgoff_t fsize; + size_t rsize; /* * Find out if we have anything cached in the range of diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c index 8d674aef8dd9..b79b6bbd7103 100644 --- a/fs/smb/client/smb2ops.c +++ b/fs/smb/client/smb2ops.c @@ -217,8 +217,8 @@ smb2_get_credits(struct mid_q_entry *mid) } static int -smb2_wait_mtu_credits(struct TCP_Server_Info *server, unsigned int size, - unsigned int *num, struct cifs_credits *credits) +smb2_wait_mtu_credits(struct TCP_Server_Info *server, size_t size, + size_t *num, struct cifs_credits *credits) { int rc = 0; unsigned int scredits, in_flight; diff --git a/fs/smb/client/transport.c b/fs/smb/client/transport.c index e52967de59e6..5a69a7430ffa 100644 --- a/fs/smb/client/transport.c +++ b/fs/smb/client/transport.c @@ -691,8 +691,8 @@ wait_for_compound_request(struct TCP_Server_Info *server, int num, } int -cifs_wait_mtu_credits(struct TCP_Server_Info *server, unsigned int size, - unsigned int *num, struct cifs_credits *credits) +cifs_wait_mtu_credits(struct TCP_Server_Info *server, size_t size, + size_t *num, struct cifs_credits *credits) { *num = size; credits->value = 0;
From patchwork Mon Feb 5 22:57:18 2024
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 197094
From: David Howells
To: Steve French
Cc: David Howells, Jeff Layton, Matthew Wilcox, Paulo Alcantara, Shyam Prasad N, Tom Talpey, Christian Brauner, netfs@lists.linux.dev, linux-cifs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Steve French, Shyam Prasad N, Rohith Surabattula
Subject: [PATCH v5 06/12] cifs: Implement netfslib hooks
Date: Mon, 5 Feb 2024 22:57:18 +0000
Message-ID: <20240205225726.3104808-7-dhowells@redhat.com>
In-Reply-To: <20240205225726.3104808-1-dhowells@redhat.com>
References: <20240205225726.3104808-1-dhowells@redhat.com>

Provide implementation of the netfslib hooks that will be used by netfslib to ask cifs to set up and perform operations. Of particular note are

 (*) cifs_clamp_length() - This is used to negotiate the size of the next subrequest in a read request, taking into account the credit available and the rsize. The credits are attached to the subrequest.

 (*) cifs_req_issue_read() - This is used to issue a subrequest that has been set up and clamped.

 (*) cifs_create_write_requests() - This is used to break the given span of file positions into suboperations according to cifs's wsize and available credits. As each subop is created, it can be dispatched or queued for dispatch.

At this point, cifs is not wired up to actually *use* netfslib; that will be done in a subsequent patch.
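The hooks are wired up through the netfs_request_ops table that this patch adds to fs/smb/client/file.c; a condensed excerpt is shown here (the full table in the diff below also fills in the free_request/free_subrequest, expand_readahead, done and post_modify handlers):

	const struct netfs_request_ops cifs_req_ops = {
		.io_request_size	= sizeof(struct cifs_io_request),
		.io_subrequest_size	= sizeof(struct cifs_io_subrequest),
		.init_request		= cifs_init_request,
		.clamp_length		= cifs_clamp_length,
		.issue_read		= cifs_req_issue_read,
		.create_write_requests	= cifs_create_write_requests,
	};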
Signed-off-by: David Howells cc: Steve French cc: Shyam Prasad N cc: Rohith Surabattula cc: Jeff Layton cc: linux-cifs@vger.kernel.org cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org cc: linux-mm@kvack.org --- fs/netfs/buffered_write.c | 3 + fs/smb/client/Kconfig | 1 + fs/smb/client/cifsglob.h | 28 ++- fs/smb/client/file.c | 357 +++++++++++++++++++++++++++++++++++ include/linux/netfs.h | 1 + include/trace/events/netfs.h | 1 + 6 files changed, 382 insertions(+), 9 deletions(-) diff --git a/fs/netfs/buffered_write.c b/fs/netfs/buffered_write.c index a3059b3168fd..2c117b825247 100644 --- a/fs/netfs/buffered_write.c +++ b/fs/netfs/buffered_write.c @@ -394,6 +394,9 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter, } while (iov_iter_count(iter)); out: + if (likely(written) && ctx->ops->post_modify) + ctx->ops->post_modify(inode); + if (unlikely(wreq)) { ret = netfs_end_writethrough(wreq, iocb); wbc_detach_inode(&wbc); diff --git a/fs/smb/client/Kconfig b/fs/smb/client/Kconfig index 2927bd174a88..2517dc242386 100644 --- a/fs/smb/client/Kconfig +++ b/fs/smb/client/Kconfig @@ -2,6 +2,7 @@ config CIFS tristate "SMB3 and CIFS support (advanced network filesystem)" depends on INET + select NETFS_SUPPORT select NLS select NLS_UCS2_UTILS select CRYPTO diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h index c0a0de24f52a..dd0cef742a64 100644 --- a/fs/smb/client/cifsglob.h +++ b/fs/smb/client/cifsglob.h @@ -1462,15 +1462,24 @@ struct cifs_aio_ctx { bool direct_io; }; +struct cifs_io_request { + struct netfs_io_request rreq; + struct cifsFileInfo *cfile; +}; + /* asynchronous read support */ struct cifs_io_subrequest { - struct netfs_io_subrequest subreq; - struct cifsFileInfo *cfile; - struct address_space *mapping; - struct cifs_aio_ctx *ctx; + union { + struct netfs_io_subrequest subreq; + struct netfs_io_request *rreq; + struct cifs_io_request *req; + }; ssize_t got_bytes; pid_t pid; + unsigned int xid; int result; + bool have_credits; + bool replay; struct kvec iov[2]; struct TCP_Server_Info *server; #ifdef CONFIG_CIFS_SMB_DIRECT @@ -1478,15 +1487,16 @@ struct cifs_io_subrequest { #endif struct cifs_credits credits; - enum writeback_sync_modes sync_mode; - bool uncached; - bool replay; - struct bio_vec *bv; - // TODO: Remove following elements struct list_head list; struct completion done; struct work_struct work; + struct cifsFileInfo *cfile; + struct address_space *mapping; + struct cifs_aio_ctx *ctx; + enum writeback_sync_modes sync_mode; + bool uncached; + struct bio_vec *bv; }; /* diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c index 63237dad0c4d..ce5f24206be0 100644 --- a/fs/smb/client/file.c +++ b/fs/smb/client/file.c @@ -36,6 +36,363 @@ #include "fs_context.h" #include "cifs_ioctl.h" #include "cached_dir.h" +#include + +static int cifs_reopen_file(struct cifsFileInfo *cfile, bool can_flush); + +static void cifs_upload_to_server(struct netfs_io_subrequest *subreq) +{ + struct cifs_io_subrequest *wdata = + container_of(subreq, struct cifs_io_subrequest, subreq); + ssize_t rc; + + trace_netfs_sreq(subreq, netfs_sreq_trace_submit); + + if (wdata->req->cfile->invalidHandle) + rc = -EAGAIN; + else + rc = wdata->server->ops->async_writev(wdata); + if (rc < 0) + add_credits_and_wake_if(wdata->server, &wdata->credits, 0); +} + +static void cifs_upload_to_server_worker(struct work_struct *work) +{ + struct netfs_io_subrequest *subreq = + container_of(work, struct netfs_io_subrequest, work); + + cifs_upload_to_server(subreq); +} + +/* + * Set up 
write requests for a writeback slice. We need to add a write request + * for each write we want to make. + */ +static void cifs_create_write_requests(struct netfs_io_request *wreq, + loff_t start, size_t remain) +{ + struct netfs_io_subrequest *subreq; + struct cifs_io_subrequest *wdata; + struct cifs_io_request *req = container_of(wreq, struct cifs_io_request, rreq); + struct TCP_Server_Info *server; + struct cifsFileInfo *open_file = req->cfile; + struct cifs_sb_info *cifs_sb = CIFS_SB(wreq->inode->i_sb); + int rc = 0; + size_t offset = 0; + pid_t pid; + unsigned int xid, max_segs = INT_MAX; + + if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_RWPIDFORWARD) + pid = open_file->pid; + else + pid = current->tgid; + + server = cifs_pick_channel(tlink_tcon(open_file->tlink)->ses); + xid = get_xid(); + +#ifdef CONFIG_CIFS_SMB_DIRECT + if (server->smbd_conn) + max_segs = server->smbd_conn->max_frmr_depth; +#endif + + do { + unsigned int nsegs = 0; + size_t max_len, part, wsize; + + subreq = netfs_create_write_request(wreq, NETFS_UPLOAD_TO_SERVER, + start, remain, + cifs_upload_to_server_worker); + if (!subreq) { + wreq->error = -ENOMEM; + break; + } + + wdata = container_of(subreq, struct cifs_io_subrequest, subreq); + +retry: + if (signal_pending(current)) { + wreq->error = -EINTR; + break; + } + + if (open_file->invalidHandle) { + rc = cifs_reopen_file(open_file, false); + if (rc < 0) { + if (rc == -EAGAIN) + goto retry; + break; + } + } + + rc = server->ops->wait_mtu_credits(server, wreq->wsize, &wsize, + &wdata->credits); + if (rc) + break; + + max_len = min(remain, wsize); + if (!max_len) { + rc = -EAGAIN; + goto failed_return_credits; + } + + part = netfs_limit_iter(&wreq->io_iter, offset, max_len, max_segs); + cifs_dbg(FYI, "create_write_request len=%zx/%zx nsegs=%u/%lu/%u\n", + part, max_len, nsegs, wreq->io_iter.nr_segs, max_segs); + if (!part) { + rc = -EIO; + goto failed_return_credits; + } + + if (part < wdata->subreq.len) { + wdata->subreq.len = part; + iov_iter_truncate(&wdata->subreq.io_iter, part); + } + + wdata->server = server; + wdata->pid = pid; + + rc = adjust_credits(server, &wdata->credits, wdata->subreq.len); + if (rc) { + add_credits_and_wake_if(server, &wdata->credits, 0); + if (rc == -EAGAIN) + goto retry; + goto failed; + } + + cifs_upload_to_server(subreq); + //netfs_queue_write_request(subreq); + start += part; + offset += part; + remain -= part; + } while (remain > 0); + + free_xid(xid); + return; + +failed_return_credits: + add_credits_and_wake_if(server, &wdata->credits, 0); +failed: + netfs_write_subrequest_terminated(subreq, rc, false); + free_xid(xid); +} + +/* + * Split the read up according to how many credits we can get for each piece. + * It's okay to sleep here if we need to wait for more credit to become + * available. + * + * We also choose the server and allocate an operation ID to be cleaned up + * later. 
+ */ +static bool cifs_clamp_length(struct netfs_io_subrequest *subreq) +{ + struct netfs_io_request *rreq = subreq->rreq; + struct TCP_Server_Info *server; + struct cifs_io_subrequest *rdata = container_of(subreq, struct cifs_io_subrequest, subreq); + struct cifs_io_request *req = container_of(subreq->rreq, struct cifs_io_request, rreq); + struct cifs_sb_info *cifs_sb = CIFS_SB(rreq->inode->i_sb); + size_t rsize = 0; + int rc; + + rdata->xid = get_xid(); + + server = cifs_pick_channel(tlink_tcon(req->cfile->tlink)->ses); + rdata->server = server; + + if (cifs_sb->ctx->rsize == 0) + cifs_sb->ctx->rsize = + server->ops->negotiate_rsize(tlink_tcon(req->cfile->tlink), + cifs_sb->ctx); + + + rc = server->ops->wait_mtu_credits(server, cifs_sb->ctx->rsize, &rsize, + &rdata->credits); + if (rc) { + subreq->error = rc; + return false; + } + + rdata->have_credits = true; + subreq->len = min_t(size_t, subreq->len, rsize); +#ifdef CONFIG_CIFS_SMB_DIRECT + if (server->smbd_conn) + subreq->max_nr_segs = server->smbd_conn->max_frmr_depth; +#endif + return true; +} + +/* + * Issue a read operation on behalf of the netfs helper functions. We're asked + * to make a read of a certain size at a point in the file. We are permitted + * to only read a portion of that, but as long as we read something, the netfs + * helper will call us again so that we can issue another read. + */ +static void cifs_req_issue_read(struct netfs_io_subrequest *subreq) +{ + struct netfs_io_request *rreq = subreq->rreq; + struct cifs_io_subrequest *rdata = container_of(subreq, struct cifs_io_subrequest, subreq); + struct cifs_io_request *req = container_of(subreq->rreq, struct cifs_io_request, rreq); + struct cifs_sb_info *cifs_sb = CIFS_SB(rreq->inode->i_sb); + pid_t pid; + int rc = 0; + + if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_RWPIDFORWARD) + pid = req->cfile->pid; + else + pid = current->tgid; // Ummm... This may be a workqueue + + cifs_dbg(FYI, "%s: op=%08x[%x] mapping=%p len=%zu/%zu\n", + __func__, rreq->debug_id, subreq->debug_index, rreq->mapping, + subreq->transferred, subreq->len); + + if (req->cfile->invalidHandle) { + do { + rc = cifs_reopen_file(req->cfile, true); + } while (rc == -EAGAIN); + if (rc) + goto out; + } + + __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags); + rdata->pid = pid; + + rc = adjust_credits(rdata->server, &rdata->credits, rdata->subreq.len); + if (!rc) { + if (rdata->req->cfile->invalidHandle) + rc = -EAGAIN; + else + rc = rdata->server->ops->async_readv(rdata); + } + +out: + if (rc) + netfs_subreq_terminated(subreq, rc, false); +} + +/* + * Initialise a request. 
+ */ +static int cifs_init_request(struct netfs_io_request *rreq, struct file *file) +{ + struct cifs_io_request *req = container_of(rreq, struct cifs_io_request, rreq); + struct cifs_sb_info *cifs_sb = CIFS_SB(rreq->inode->i_sb); + struct cifsFileInfo *open_file = NULL; + int ret; + + rreq->rsize = cifs_sb->ctx->rsize; + rreq->wsize = cifs_sb->ctx->wsize; + + if (file) { + open_file = file->private_data; + rreq->netfs_priv = file->private_data; + req->cfile = cifsFileInfo_get(open_file); + } else if (rreq->origin == NETFS_WRITEBACK || + rreq->origin == NETFS_LAUNDER_WRITE) { + ret = cifs_get_writable_file(CIFS_I(rreq->inode), FIND_WR_ANY, &req->cfile); + if (ret) { + cifs_dbg(VFS, "No writable handle in writepages ret=%d\n", ret); + return ret; + } + } else { + WARN_ON_ONCE(1); + return -EIO; + } + + return 0; +} + +/* + * Expand the size of a readahead to the size of the rsize, if at least as + * large as a page, allowing for the possibility that rsize is not pow-2 + * aligned. + */ +static void cifs_expand_readahead(struct netfs_io_request *rreq) +{ + unsigned int rsize = rreq->rsize; + loff_t misalignment, i_size = i_size_read(rreq->inode); + + if (rsize < PAGE_SIZE) + return; + + if (rsize < INT_MAX) + rsize = roundup_pow_of_two(rsize); + else + rsize = ((unsigned int)INT_MAX + 1) / 2; + + misalignment = rreq->start & (rsize - 1); + if (misalignment) { + rreq->start -= misalignment; + rreq->len += misalignment; + } + + rreq->len = round_up(rreq->len, rsize); + if (rreq->start < i_size && rreq->len > i_size - rreq->start) + rreq->len = i_size - rreq->start; +} + +/* + * Completion of a request operation. + */ +static void cifs_rreq_done(struct netfs_io_request *rreq) +{ + struct timespec64 atime, mtime; + struct inode *inode = rreq->inode; + + /* we do not want atime to be less than mtime, it broke some apps */ + atime = inode_set_atime_to_ts(inode, current_time(inode)); + mtime = inode_get_mtime(inode); + if (timespec64_compare(&atime, &mtime)) + inode_set_atime_to_ts(inode, inode_get_mtime(inode)); +} + +static void cifs_post_modify(struct inode *inode) +{ + /* Indication to update ctime and mtime as close is deferred */ + set_bit(CIFS_INO_MODIFIED_ATTR, &CIFS_I(inode)->flags); +} + +static void cifs_free_request(struct netfs_io_request *rreq) +{ + struct cifs_io_request *req = container_of(rreq, struct cifs_io_request, rreq); + + if (req->cfile) + cifsFileInfo_put(req->cfile); +} + +static void cifs_free_subrequest(struct netfs_io_subrequest *subreq) +{ + struct cifs_io_subrequest *rdata = + container_of(subreq, struct cifs_io_subrequest, subreq); + int rc; + + if (rdata->subreq.source == NETFS_DOWNLOAD_FROM_SERVER) { +#ifdef CONFIG_CIFS_SMB_DIRECT + if (rdata->mr) { + smbd_deregister_mr(rdata->mr); + rdata->mr = NULL; + } +#endif + + if (rdata->have_credits) + add_credits_and_wake_if(rdata->server, &rdata->credits, 0); + rc = subreq->error; + free_xid(rdata->xid); + } +} + +const struct netfs_request_ops cifs_req_ops = { + .io_request_size = sizeof(struct cifs_io_request), + .io_subrequest_size = sizeof(struct cifs_io_subrequest), + .init_request = cifs_init_request, + .free_request = cifs_free_request, + .free_subrequest = cifs_free_subrequest, + .expand_readahead = cifs_expand_readahead, + .clamp_length = cifs_clamp_length, + .issue_read = cifs_req_issue_read, + .done = cifs_rreq_done, + .post_modify = cifs_post_modify, + .create_write_requests = cifs_create_write_requests, +}; /* * Remove the dirty flags from a span of pages. 
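As background for the ops table above: netfslib allocates the request and subrequest structures itself, sized by .io_request_size and .io_subrequest_size, which is why the handlers above can recover the cifs-private state with container_of(). A minimal sketch of that embedding, with the field layout abridged and the CIFS_REQ() helper invented purely for illustration:

/*
 * Sketch only (not part of the patch): the netfs_io_request is embedded at
 * the start of the cifs-private request, so netfslib's allocation of
 * io_request_size bytes covers both and container_of() converts between them.
 */
struct cifs_io_request {
	struct netfs_io_request	rreq;	/* netfslib part; must come first */
	struct cifsFileInfo	*cfile;	/* cifs part: the open file handle */
};

/* Hypothetical helper, shown only to make the conversion explicit. */
static inline struct cifs_io_request *CIFS_REQ(struct netfs_io_request *rreq)
{
	return container_of(rreq, struct cifs_io_request, rreq);
}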
diff --git a/include/linux/netfs.h b/include/linux/netfs.h index 100cbb261269..61195cf16d6e 100644 --- a/include/linux/netfs.h +++ b/include/linux/netfs.h @@ -312,6 +312,7 @@ struct netfs_request_ops { /* Modification handling */ void (*update_i_size)(struct inode *inode, loff_t i_size); + void (*post_modify)(struct inode *inode); /* Write request handling */ void (*create_write_requests)(struct netfs_io_request *wreq, diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h index 447a8c21cf57..06567b5be8fa 100644 --- a/include/trace/events/netfs.h +++ b/include/trace/events/netfs.h @@ -101,6 +101,7 @@ #define netfs_sreq_ref_traces \ EM(netfs_sreq_trace_get_copy_to_cache, "GET COPY2C ") \ EM(netfs_sreq_trace_get_resubmit, "GET RESUBMIT") \ + EM(netfs_sreq_trace_get_submit, "GET SUBMIT") \ EM(netfs_sreq_trace_get_short_read, "GET SHORTRD") \ EM(netfs_sreq_trace_new, "NEW ") \ EM(netfs_sreq_trace_put_clear, "PUT CLEAR ") \
From patchwork Mon Feb 5 22:57:19 2024 X-Patchwork-Submitter: David Howells X-Patchwork-Id: 197091
From: David Howells To: Steve French Cc: David Howells , Jeff Layton , Matthew Wilcox , Paulo Alcantara , Shyam Prasad N , Tom Talpey , Christian Brauner , netfs@lists.linux.dev, linux-cifs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Steve French , Shyam Prasad N , Rohith Surabattula Subject: [PATCH v5 07/12] cifs: Replace the writedata replay bool with a netfs sreq flag Date: Mon, 5 Feb 2024 22:57:19 +0000 Message-ID: <20240205225726.3104808-8-dhowells@redhat.com> In-Reply-To: <20240205225726.3104808-1-dhowells@redhat.com> References: <20240205225726.3104808-1-dhowells@redhat.com>
Replace the 'replay' bool in cifs_writedata (now cifs_io_subrequest) with a flag in the netfs_io_subrequest flags.
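For illustration, the substitution this makes is the standard atomic bitflag idiom on the netfs subrequest; a minimal sketch of the before/after (the two helper names below are invented for clarity and do not exist in the patch):

/* Before: a dedicated bool, e.g. wdata->replay = true. */

/* After: a generic netfs subrequest flag, set and tested atomically. */
static inline void cifs_mark_replay(struct cifs_io_subrequest *wdata)
{
	set_bit(NETFS_SREQ_RETRYING, &wdata->subreq.flags);
}

static inline bool cifs_is_replay(struct cifs_io_subrequest *wdata)
{
	return test_bit(NETFS_SREQ_RETRYING, &wdata->subreq.flags);
}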
Signed-off-by: David Howells cc: Steve French cc: Shyam Prasad N cc: Rohith Surabattula cc: Jeff Layton cc: linux-cifs@vger.kernel.org cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org cc: linux-mm@kvack.org --- fs/smb/client/cifsglob.h | 1 - fs/smb/client/file.c | 2 +- fs/smb/client/smb2pdu.c | 4 ++-- include/linux/netfs.h | 1 + 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h index dd0cef742a64..1dfed3eddaa2 100644 --- a/fs/smb/client/cifsglob.h +++ b/fs/smb/client/cifsglob.h @@ -1479,7 +1479,6 @@ struct cifs_io_subrequest { unsigned int xid; int result; bool have_credits; - bool replay; struct kvec iov[2]; struct TCP_Server_Info *server; #ifdef CONFIG_CIFS_SMB_DIRECT diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c index ce5f24206be0..14602ed6bc39 100644 --- a/fs/smb/client/file.c +++ b/fs/smb/client/file.c @@ -3645,7 +3645,7 @@ cifs_resend_wdata(struct cifs_io_subrequest *wdata, struct list_head *wdata_list if (wdata->cfile->invalidHandle) rc = -EAGAIN; else { - wdata->replay = true; + set_bit(NETFS_SREQ_RETRYING, &wdata->subreq.flags); #ifdef CONFIG_CIFS_SMB_DIRECT if (wdata->mr) { wdata->mr->need_invalidate = true; diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c index 2ecc5f210329..84e3675eb41e 100644 --- a/fs/smb/client/smb2pdu.c +++ b/fs/smb/client/smb2pdu.c @@ -4797,7 +4797,7 @@ smb2_async_writev(struct cifs_io_subrequest *wdata) struct cifs_io_parms *io_parms = NULL; int credit_request; - if (!wdata->server || wdata->replay) + if (!wdata->server || test_bit(NETFS_SREQ_RETRYING, &wdata->subreq.flags)) server = wdata->server = cifs_pick_channel(tcon->ses); /* @@ -4882,7 +4882,7 @@ smb2_async_writev(struct cifs_io_subrequest *wdata) rqst.rq_nvec = 1; rqst.rq_iter = wdata->subreq.io_iter; rqst.rq_iter_size = iov_iter_count(&rqst.rq_iter); - if (wdata->replay) + if (test_bit(NETFS_SREQ_RETRYING, &wdata->subreq.flags)) smb2_set_replay(server, &rqst); #ifdef CONFIG_CIFS_SMB_DIRECT if (wdata->mr) diff --git a/include/linux/netfs.h b/include/linux/netfs.h index 61195cf16d6e..455ccfe8bffa 100644 --- a/include/linux/netfs.h +++ b/include/linux/netfs.h @@ -224,6 +224,7 @@ struct netfs_io_subrequest { #define NETFS_SREQ_SEEK_DATA_READ 3 /* Set if ->read() should SEEK_DATA first */ #define NETFS_SREQ_NO_PROGRESS 4 /* Set if we didn't manage to read any data */ #define NETFS_SREQ_ONDEMAND 5 /* Set if it's from on-demand read mode */ +#define NETFS_SREQ_RETRYING 6 /* Set if we're retrying the op */ }; enum netfs_io_origin { From patchwork Mon Feb 5 22:57:20 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 197096 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7301:168b:b0:106:860b:bbdd with SMTP id ma11csp1198726dyb; Mon, 5 Feb 2024 15:03:16 -0800 (PST) X-Google-Smtp-Source: AGHT+IEYdwE57BLNXOa4m/wyRJaI3yk7AoRUjiEXLwDuRE9Xyk3NqruTspuiIGaeG55rZ/Bj9tVy X-Received: by 2002:a05:6a00:189f:b0:6e0:4e85:54f0 with SMTP id x31-20020a056a00189f00b006e04e8554f0mr1345502pfh.28.1707174196237; Mon, 05 Feb 2024 15:03:16 -0800 (PST) ARC-Seal: i=2; a=rsa-sha256; t=1707174196; cv=pass; d=google.com; s=arc-20160816; b=gHjJoOd9IsTcSSOtcpRObrwATUUg+yQH+FgU6ipoy1ONbEQ+3CL4jCPE32u0jogGSq XHVeaYlog48fUAOezMDyN20OTTW18dspe9t/n6kIE2Hemv3LNcCQPbLfcFXQtUJMh2EH ekTbbdb4YUT50igTql1xeA3sWccyFbgaF423+mHQmQLfTdSJuxcDORhWkG6br5RuyhlO SCTcR1U6OLHM2hULZnr5TIk8vvAgzbnn+8ZLvT4TtH8WBQtchuhXAsbfUTC3mysP+Ue5 
From: David Howells To: Steve French Cc: David Howells , Jeff Layton , Matthew Wilcox , Paulo Alcantara , Shyam Prasad N , Tom Talpey , Christian Brauner , netfs@lists.linux.dev, linux-cifs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Steve French , Shyam Prasad N , Rohith Surabattula Subject: [PATCH v5 08/12] cifs: Move cifs_loose_read_iter() and cifs_file_write_iter() to file.c Date: Mon, 5 Feb 2024 22:57:20 +0000 Message-ID: <20240205225726.3104808-9-dhowells@redhat.com> In-Reply-To: <20240205225726.3104808-1-dhowells@redhat.com> References: <20240205225726.3104808-1-dhowells@redhat.com>
Move cifs_loose_read_iter() and cifs_file_write_iter() to file.c so that they are colocated with similar functions rather than being split with cifsfs.c.
Signed-off-by: David Howells cc: Steve French cc: Shyam Prasad N cc: Rohith Surabattula cc: Jeff Layton cc: linux-cifs@vger.kernel.org cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org cc: linux-mm@kvack.org --- fs/smb/client/cifsfs.c | 55 ------------------------------------------ fs/smb/client/cifsfs.h | 2 ++ fs/smb/client/file.c | 53 ++++++++++++++++++++++++++++++++++++++++ 3 files changed, 55 insertions(+), 55 deletions(-) diff --git a/fs/smb/client/cifsfs.c b/fs/smb/client/cifsfs.c index 41617541d175..4b6d1a5e4741 100644 --- a/fs/smb/client/cifsfs.c +++ b/fs/smb/client/cifsfs.c @@ -984,61 +984,6 @@ cifs_smb3_do_mount(struct file_system_type *fs_type, return root; } - -static ssize_t -cifs_loose_read_iter(struct kiocb *iocb, struct iov_iter *iter) -{ - ssize_t rc; - struct inode *inode = file_inode(iocb->ki_filp); - - if (iocb->ki_flags & IOCB_DIRECT) - return cifs_user_readv(iocb, iter); - - rc = cifs_revalidate_mapping(inode); - if (rc) - return rc; - - return generic_file_read_iter(iocb, iter); -} - -static ssize_t cifs_file_write_iter(struct kiocb *iocb, struct iov_iter *from) -{ - struct inode *inode = file_inode(iocb->ki_filp); - struct cifsInodeInfo *cinode = CIFS_I(inode); - ssize_t written; - int rc; - - if (iocb->ki_filp->f_flags & O_DIRECT) { - written = cifs_user_writev(iocb, from); - if (written > 0 && CIFS_CACHE_READ(cinode)) { - cifs_zap_mapping(inode); - cifs_dbg(FYI, - "Set no oplock for inode=%p after a write operation\n", - inode); - cinode->oplock = 0; - } - return written; - } - - written = cifs_get_writer(cinode); - if (written) - return written; - - written = generic_file_write_iter(iocb, from); - - if (CIFS_CACHE_WRITE(CIFS_I(inode))) - goto out; - - rc = filemap_fdatawrite(inode->i_mapping); - if (rc) - cifs_dbg(FYI, "cifs_file_write_iter: %d rc on %p inode\n", - rc, inode); - -out: - cifs_put_writer(cinode); - return written; -} - static loff_t cifs_llseek(struct file *file, loff_t offset, int whence) { struct cifsFileInfo *cfile = file->private_data; diff --git a/fs/smb/client/cifsfs.h b/fs/smb/client/cifsfs.h index 685f7d1139c6..e8e0f863e935 100644 --- a/fs/smb/client/cifsfs.h +++ b/fs/smb/client/cifsfs.h @@ -100,6 +100,8 @@ extern ssize_t cifs_strict_readv(struct kiocb *iocb, struct iov_iter *to); extern ssize_t cifs_user_writev(struct kiocb *iocb, struct iov_iter *from); extern ssize_t cifs_direct_writev(struct kiocb *iocb, struct iov_iter *from); extern ssize_t cifs_strict_writev(struct kiocb *iocb, struct iov_iter *from); +ssize_t cifs_file_write_iter(struct kiocb *iocb, struct iov_iter *from); +ssize_t cifs_loose_read_iter(struct kiocb *iocb, struct iov_iter *iter); extern int cifs_flock(struct file *pfile, int cmd, struct file_lock *plock); extern int cifs_lock(struct file *, int, struct file_lock *); extern int cifs_fsync(struct file *, loff_t, loff_t, int); diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c index 14602ed6bc39..a1e6a4c83dc6 100644 --- a/fs/smb/client/file.c +++ b/fs/smb/client/file.c @@ -4570,6 +4570,59 @@ ssize_t cifs_user_readv(struct kiocb *iocb, struct iov_iter *to) return __cifs_readv(iocb, to, false); } +ssize_t cifs_loose_read_iter(struct kiocb *iocb, struct iov_iter *iter) +{ + ssize_t rc; + struct inode *inode = file_inode(iocb->ki_filp); + + if (iocb->ki_flags & IOCB_DIRECT) + return cifs_user_readv(iocb, iter); + + rc = cifs_revalidate_mapping(inode); + if (rc) + return rc; + + return generic_file_read_iter(iocb, iter); +} + +ssize_t cifs_file_write_iter(struct kiocb *iocb, struct iov_iter *from) +{ + struct 
inode *inode = file_inode(iocb->ki_filp); + struct cifsInodeInfo *cinode = CIFS_I(inode); + ssize_t written; + int rc; + + if (iocb->ki_filp->f_flags & O_DIRECT) { + written = cifs_user_writev(iocb, from); + if (written > 0 && CIFS_CACHE_READ(cinode)) { + cifs_zap_mapping(inode); + cifs_dbg(FYI, + "Set no oplock for inode=%p after a write operation\n", + inode); + cinode->oplock = 0; + } + return written; + } + + written = cifs_get_writer(cinode); + if (written) + return written; + + written = generic_file_write_iter(iocb, from); + + if (CIFS_CACHE_WRITE(CIFS_I(inode))) + goto out; + + rc = filemap_fdatawrite(inode->i_mapping); + if (rc) + cifs_dbg(FYI, "cifs_file_write_iter: %d rc on %p inode\n", + rc, inode); + +out: + cifs_put_writer(cinode); + return written; +} + ssize_t cifs_strict_readv(struct kiocb *iocb, struct iov_iter *to) { From patchwork Mon Feb 5 22:57:21 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 197099 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7301:168b:b0:106:860b:bbdd with SMTP id ma11csp1199332dyb; Mon, 5 Feb 2024 15:04:18 -0800 (PST) X-Google-Smtp-Source: AGHT+IEtRcOsjA00l2Zy0M0CrV2rT1EeQjRfwMZzW15rX7y2yQ53TAHUVcql8VrN1Ys9rzU9q/Db X-Received: by 2002:a92:ca88:0:b0:363:86fd:9673 with SMTP id t8-20020a92ca88000000b0036386fd9673mr1359102ilo.10.1707174258747; Mon, 05 Feb 2024 15:04:18 -0800 (PST) ARC-Seal: i=2; a=rsa-sha256; t=1707174258; cv=pass; d=google.com; s=arc-20160816; b=YDKiiLJaBCjJbFTnUtQKezs0mwNA5TMlh9LALfmiw0KvRJ48jDThXabgjs8YXl4dO4 IAB9NR6DCUTg3nbFfCP45bYPtlenym15368585I2LNuO0Z1vramYP3lRyouJN2xn0gR5 KecLBGqwkb0WZG+UiJmp+phMLsTzo+IPy1eBwH53fBdpisONqHmB0JiQfPwIyBqtnhH8 bA1Eyq26XGOqtk35jf+drRnI8uBBQrD9rczYkVMerDnxND8Yk7qy6qqfkT8duljc0h5K VvVwCDaK5izb9p0rI0aBVN4CEY9zUZ5R8CwWUDb50RTHCTNJQ6/9LoqrlQM4FG0KOghx v1tw== ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:references:in-reply-to:message-id :date:subject:cc:to:from:dkim-signature; bh=trLN/PFnIDPiUbHcmp97bv0I05QpfYfbX0Ux6A/6ioM=; fh=sy1GTywYqTWghDL5gCYD9MO6PdoB05dOLgc2tJ37mjs=; b=iM0YlHLWbjURVFHp9caT3q2UJ0r2vbaW59cKrlyVOcltdnXJf0cJcBLjlaCr+y7TQA xiGErF5okqKGvoqQSR2+v0WNZOPRcvT++vFn17u/uNUGE87SK6n2pJnOLqYzxxOBlpft 5lXzEq7o+FrPRYbJQdgwYJK73+RlcUJxK83yj0eZ3RbOWFUIPhgflI27OMkafjvhzvWV UR+LIsFYfohM89ehSuWHAQnQ9nOXXlzUEQD6f21Qiz40kMHcYk2xGPkt1L6Vt8GrO57g rwjM6B7p0eIAVzUgAxEhA2X5+yjak519RDVJ82I8XaWahnN4JdBDlnhV8hjgjEF/JKl4 86RA==; dara=google.com ARC-Authentication-Results: i=2; mx.google.com; dkim=pass header.i=@redhat.com header.s=mimecast20190719 header.b=Upv9tYjv; arc=pass (i=1 spf=pass spfdomain=redhat.com dkim=pass dkdomain=redhat.com dmarc=pass fromdomain=redhat.com); spf=pass (google.com: domain of linux-kernel+bounces-54055-ouuuleilei=gmail.com@vger.kernel.org designates 139.178.88.99 as permitted sender) smtp.mailfrom="linux-kernel+bounces-54055-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=redhat.com X-Forwarded-Encrypted: i=1; AJvYcCWXUldChkeJ+q9RAZw3xd3SdpMGHwluSGMWYD56WhrcNChWtZQVyRXkaYf4YCTtfnpwHIQdn+iwl03ddvdmkSs6pqIX8A== Received: from sv.mirrors.kernel.org (sv.mirrors.kernel.org. 
From: David Howells To: Steve French Cc: David Howells , Jeff Layton , Matthew Wilcox , Paulo Alcantara , Shyam Prasad N , Tom Talpey , Christian Brauner , netfs@lists.linux.dev, linux-cifs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Steve French , Shyam Prasad N , Rohith Surabattula Subject: [PATCH v5 09/12] cifs: Cut over to using netfslib Date: Mon, 5 Feb 2024 22:57:21 +0000 Message-ID: <20240205225726.3104808-10-dhowells@redhat.com> In-Reply-To: <20240205225726.3104808-1-dhowells@redhat.com> References: <20240205225726.3104808-1-dhowells@redhat.com>
Make the cifs filesystem use netfslib to handle reading and writing on behalf of cifs. The changes include: (1) Various read_iter/write_iter type functions are turned into wrappers around netfslib API functions or are pointed directly at those functions: cifs_file_direct{,_nobrl}_ops switch to use netfs_unbuffered_read_iter and netfs_unbuffered_write_iter. Large pieces of code that will be removed are #if'd out and will be removed in subsequent patches. [?] Why does cifs mark the page dirty in the destination buffer of a DIO read? Should that happen automatically? Does netfs need to do that?
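As a rough sketch of the wrapper pattern described in (1), mirroring the cifs_loose_read_iter() hunk later in this patch (the function name here is made up; error handling beyond the revalidate call is omitted): direct I/O is routed to the unbuffered netfs path, everything else goes through the buffered netfs path once the pagecache has been revalidated against the server.

static ssize_t example_cifs_read_iter(struct kiocb *iocb, struct iov_iter *iter)
{
	struct inode *inode = file_inode(iocb->ki_filp);
	ssize_t rc;

	if (iocb->ki_flags & IOCB_DIRECT)
		return netfs_unbuffered_read_iter(iocb, iter);

	/* Check the pagecache is still valid against the server copy. */
	rc = cifs_revalidate_mapping(inode);
	if (rc)
		return rc;

	return netfs_file_read_iter(iocb, iter);
}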
Signed-off-by: David Howells cc: Steve French cc: Shyam Prasad N cc: Rohith Surabattula cc: Jeff Layton cc: linux-cifs@vger.kernel.org cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org cc: linux-mm@kvack.org --- fs/netfs/io.c | 7 +- fs/smb/client/cifsfs.c | 8 +- fs/smb/client/cifsfs.h | 8 +- fs/smb/client/cifsglob.h | 3 +- fs/smb/client/cifsproto.h | 8 +- fs/smb/client/cifssmb.c | 45 ++++++----- fs/smb/client/file.c | 166 ++++++++++++++++++++------------------ fs/smb/client/fscache.c | 2 + fs/smb/client/fscache.h | 4 + fs/smb/client/inode.c | 19 ++++- fs/smb/client/smb2pdu.c | 100 ++++++++++++++--------- fs/smb/client/trace.h | 144 ++++++++++++++++++++++++++++----- fs/smb/client/transport.c | 3 + 13 files changed, 347 insertions(+), 170 deletions(-) diff --git a/fs/netfs/io.c b/fs/netfs/io.c index e8ff1e61ce79..02f202c6209f 100644 --- a/fs/netfs/io.c +++ b/fs/netfs/io.c @@ -352,8 +352,13 @@ static void netfs_rreq_assess_dio(struct netfs_io_request *rreq) unsigned int i; size_t transferred = 0; - for (i = 0; i < rreq->direct_bv_count; i++) + for (i = 0; i < rreq->direct_bv_count; i++) { flush_dcache_page(rreq->direct_bv[i].bv_page); + // TODO: cifs marks pages in the destination buffer + // dirty under some circumstances after a read. Do we + // need to do that too? + set_page_dirty(rreq->direct_bv[i].bv_page); + } list_for_each_entry(subreq, &rreq->subrequests, rreq_link) { if (subreq->error || subreq->transferred == 0) diff --git a/fs/smb/client/cifsfs.c b/fs/smb/client/cifsfs.c index 4b6d1a5e4741..de838cb82ce1 100644 --- a/fs/smb/client/cifsfs.c +++ b/fs/smb/client/cifsfs.c @@ -1516,8 +1516,8 @@ const struct file_operations cifs_file_strict_ops = { }; const struct file_operations cifs_file_direct_ops = { - .read_iter = cifs_direct_readv, - .write_iter = cifs_direct_writev, + .read_iter = netfs_unbuffered_read_iter, + .write_iter = netfs_file_write_iter, .open = cifs_open, .release = cifs_close, .lock = cifs_lock, @@ -1572,8 +1572,8 @@ const struct file_operations cifs_file_strict_nobrl_ops = { }; const struct file_operations cifs_file_direct_nobrl_ops = { - .read_iter = cifs_direct_readv, - .write_iter = cifs_direct_writev, + .read_iter = netfs_unbuffered_read_iter, + .write_iter = netfs_file_write_iter, .open = cifs_open, .release = cifs_close, .fsync = cifs_fsync, diff --git a/fs/smb/client/cifsfs.h b/fs/smb/client/cifsfs.h index e8e0f863e935..5cd547b7b5ea 100644 --- a/fs/smb/client/cifsfs.h +++ b/fs/smb/client/cifsfs.h @@ -85,6 +85,7 @@ extern const struct inode_operations cifs_namespace_inode_operations; /* Functions related to files and directories */ +extern const struct netfs_request_ops cifs_req_ops; extern const struct file_operations cifs_file_ops; extern const struct file_operations cifs_file_direct_ops; /* if directio mnt */ extern const struct file_operations cifs_file_strict_ops; /* if strictio mnt */ @@ -94,11 +95,7 @@ extern const struct file_operations cifs_file_strict_nobrl_ops; extern int cifs_open(struct inode *inode, struct file *file); extern int cifs_close(struct inode *inode, struct file *file); extern int cifs_closedir(struct inode *inode, struct file *file); -extern ssize_t cifs_user_readv(struct kiocb *iocb, struct iov_iter *to); -extern ssize_t cifs_direct_readv(struct kiocb *iocb, struct iov_iter *to); extern ssize_t cifs_strict_readv(struct kiocb *iocb, struct iov_iter *to); -extern ssize_t cifs_user_writev(struct kiocb *iocb, struct iov_iter *from); -extern ssize_t cifs_direct_writev(struct kiocb *iocb, struct iov_iter *from); extern ssize_t 
cifs_strict_writev(struct kiocb *iocb, struct iov_iter *from); ssize_t cifs_file_write_iter(struct kiocb *iocb, struct iov_iter *from); ssize_t cifs_loose_read_iter(struct kiocb *iocb, struct iov_iter *iter); @@ -112,9 +109,6 @@ extern int cifs_file_strict_mmap(struct file *file, struct vm_area_struct *vma); extern const struct file_operations cifs_dir_ops; extern int cifs_dir_open(struct inode *inode, struct file *file); extern int cifs_readdir(struct file *file, struct dir_context *ctx); -extern void cifs_pages_written_back(struct inode *inode, loff_t start, unsigned int len); -extern void cifs_pages_write_failed(struct inode *inode, loff_t start, unsigned int len); -extern void cifs_pages_write_redirty(struct inode *inode, loff_t start, unsigned int len); /* Functions related to dir entries */ extern const struct dentry_operations cifs_dentry_ops; diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h index 1dfed3eddaa2..259dc3893e28 100644 --- a/fs/smb/client/cifsglob.h +++ b/fs/smb/client/cifsglob.h @@ -1486,7 +1486,7 @@ struct cifs_io_subrequest { #endif struct cifs_credits credits; - // TODO: Remove following elements +#if 0 // TODO: Remove following elements struct list_head list; struct completion done; struct work_struct work; @@ -1496,6 +1496,7 @@ struct cifs_io_subrequest { enum writeback_sync_modes sync_mode; bool uncached; struct bio_vec *bv; +#endif }; /* diff --git a/fs/smb/client/cifsproto.h b/fs/smb/client/cifsproto.h index 00cb0d2dc935..d869918fb5c8 100644 --- a/fs/smb/client/cifsproto.h +++ b/fs/smb/client/cifsproto.h @@ -145,8 +145,8 @@ extern int checkSMB(char *buf, unsigned int len, struct TCP_Server_Info *srvr); extern bool is_valid_oplock_break(char *, struct TCP_Server_Info *); extern bool backup_cred(struct cifs_sb_info *); extern bool is_size_safe_to_change(struct cifsInodeInfo *, __u64 eof); -extern void cifs_update_eof(struct cifsInodeInfo *cifsi, loff_t offset, - unsigned int bytes_written); +void cifs_write_subrequest_terminated(struct cifs_io_subrequest *wdata, ssize_t result, + bool was_async); extern struct cifsFileInfo *find_writable_file(struct cifsInodeInfo *, int); extern int cifs_get_writable_file(struct cifsInodeInfo *cifs_inode, int flags, @@ -590,6 +590,7 @@ void __cifs_put_smb_ses(struct cifs_ses *ses); extern struct cifs_ses * cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx); +#if 0 // TODO Remove void cifs_readdata_release(struct cifs_io_subrequest *rdata); static inline void cifs_get_readdata(struct cifs_io_subrequest *rdata) { @@ -600,11 +601,13 @@ static inline void cifs_put_readdata(struct cifs_io_subrequest *rdata) if (refcount_dec_and_test(&rdata->subreq.ref)) cifs_readdata_release(rdata); } +#endif int cifs_async_readv(struct cifs_io_subrequest *rdata); int cifs_readv_receive(struct TCP_Server_Info *server, struct mid_q_entry *mid); int cifs_async_writev(struct cifs_io_subrequest *wdata); void cifs_writev_complete(struct work_struct *work); +#if 0 // TODO Remove struct cifs_io_subrequest *cifs_writedata_alloc(work_func_t complete); void cifs_writedata_release(struct cifs_io_subrequest *rdata); static inline void cifs_get_writedata(struct cifs_io_subrequest *wdata) @@ -616,6 +619,7 @@ static inline void cifs_put_writedata(struct cifs_io_subrequest *wdata) if (refcount_dec_and_test(&wdata->subreq.ref)) cifs_writedata_release(wdata); } +#endif int cifs_query_mf_symlink(unsigned int xid, struct cifs_tcon *tcon, struct cifs_sb_info *cifs_sb, const unsigned char *path, char *pbuf, diff --git 
a/fs/smb/client/cifssmb.c b/fs/smb/client/cifssmb.c index b1c33a624157..b6214f1e3261 100644 --- a/fs/smb/client/cifssmb.c +++ b/fs/smb/client/cifssmb.c @@ -1265,7 +1265,7 @@ static void cifs_readv_callback(struct mid_q_entry *mid) { struct cifs_io_subrequest *rdata = mid->callback_data; - struct cifs_tcon *tcon = tlink_tcon(rdata->cfile->tlink); + struct cifs_tcon *tcon = tlink_tcon(rdata->req->cfile->tlink); struct TCP_Server_Info *server = tcon->ses->server; struct smb_rqst rqst = { .rq_iov = rdata->iov, .rq_nvec = 2, @@ -1306,7 +1306,12 @@ cifs_readv_callback(struct mid_q_entry *mid) rdata->result = -EIO; } - queue_work(cifsiod_wq, &rdata->work); + if (rdata->result == 0 || rdata->result == -EAGAIN) + iov_iter_advance(&rdata->subreq.io_iter, rdata->got_bytes); + netfs_subreq_terminated(&rdata->subreq, + (rdata->result == 0 || rdata->result == -EAGAIN) ? + rdata->got_bytes : rdata->result, + false); release_mid(mid); add_credits(server, &credits, 0); } @@ -1318,7 +1323,7 @@ cifs_async_readv(struct cifs_io_subrequest *rdata) int rc; READ_REQ *smb = NULL; int wct; - struct cifs_tcon *tcon = tlink_tcon(rdata->cfile->tlink); + struct cifs_tcon *tcon = tlink_tcon(rdata->req->cfile->tlink); struct smb_rqst rqst = { .rq_iov = rdata->iov, .rq_nvec = 2 }; @@ -1343,7 +1348,7 @@ cifs_async_readv(struct cifs_io_subrequest *rdata) smb->hdr.PidHigh = cpu_to_le16((__u16)(rdata->pid >> 16)); smb->AndXCommand = 0xFF; /* none */ - smb->Fid = rdata->cfile->fid.netfid; + smb->Fid = rdata->req->cfile->fid.netfid; smb->OffsetLow = cpu_to_le32(rdata->subreq.start & 0xFFFFFFFF); if (wct == 12) smb->OffsetHigh = cpu_to_le32(rdata->subreq.start >> 32); @@ -1613,15 +1618,16 @@ static void cifs_writev_callback(struct mid_q_entry *mid) { struct cifs_io_subrequest *wdata = mid->callback_data; - struct cifs_tcon *tcon = tlink_tcon(wdata->cfile->tlink); - unsigned int written; + struct cifs_tcon *tcon = tlink_tcon(wdata->req->cfile->tlink); WRITE_RSP *smb = (WRITE_RSP *)mid->resp_buf; struct cifs_credits credits = { .value = 1, .instance = 0 }; + ssize_t result; + size_t written; switch (mid->mid_state) { case MID_RESPONSE_RECEIVED: - wdata->result = cifs_check_receive(mid, tcon->ses->server, 0); - if (wdata->result != 0) + result = cifs_check_receive(mid, tcon->ses->server, 0); + if (result != 0) break; written = le16_to_cpu(smb->CountHigh); @@ -1637,20 +1643,20 @@ cifs_writev_callback(struct mid_q_entry *mid) written &= 0xFFFF; if (written < wdata->subreq.len) - wdata->result = -ENOSPC; + result = -ENOSPC; else - wdata->subreq.len = written; + result = written; break; case MID_REQUEST_SUBMITTED: case MID_RETRY_NEEDED: - wdata->result = -EAGAIN; + result = -EAGAIN; break; default: - wdata->result = -EIO; + result = -EIO; break; } - queue_work(cifsiod_wq, &wdata->work); + cifs_write_subrequest_terminated(wdata, result, true); release_mid(mid); add_credits(tcon->ses->server, &credits, 0); } @@ -1662,7 +1668,7 @@ cifs_async_writev(struct cifs_io_subrequest *wdata) int rc = -EACCES; WRITE_REQ *smb = NULL; int wct; - struct cifs_tcon *tcon = tlink_tcon(wdata->cfile->tlink); + struct cifs_tcon *tcon = tlink_tcon(wdata->req->cfile->tlink); struct kvec iov[2]; struct smb_rqst rqst = { }; @@ -1672,7 +1678,8 @@ cifs_async_writev(struct cifs_io_subrequest *wdata) wct = 12; if (wdata->subreq.start >> 32 > 0) { /* can not handle big offset for old srv */ - return -EIO; + rc = -EIO; + goto out; } } @@ -1684,7 +1691,7 @@ cifs_async_writev(struct cifs_io_subrequest *wdata) smb->hdr.PidHigh = cpu_to_le16((__u16)(wdata->pid >> 16)); 
smb->AndXCommand = 0xFF; /* none */ - smb->Fid = wdata->cfile->fid.netfid; + smb->Fid = wdata->req->cfile->fid.netfid; smb->OffsetLow = cpu_to_le32(wdata->subreq.start & 0xFFFFFFFF); if (wct == 14) smb->OffsetHigh = cpu_to_le32(wdata->subreq.start >> 32); @@ -1724,17 +1731,17 @@ cifs_async_writev(struct cifs_io_subrequest *wdata) iov[1].iov_len += 4; /* pad bigger by four bytes */ } - cifs_get_writedata(wdata); rc = cifs_call_async(tcon->ses->server, &rqst, NULL, cifs_writev_callback, NULL, wdata, 0, NULL); if (rc == 0) cifs_stats_inc(&tcon->stats.cifs_stats.num_writes); - else - cifs_put_writedata(wdata); async_writev_out: cifs_small_buf_release(smb); +out: + if (rc) + cifs_write_subrequest_terminated(wdata, rc, false); return rc; } diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c index a1e6a4c83dc6..ff55749f5709 100644 --- a/fs/smb/client/file.c +++ b/fs/smb/client/file.c @@ -22,6 +22,7 @@ #include #include #include +#include #include #include "cifsfs.h" #include "cifspdu.h" @@ -172,7 +173,7 @@ static void cifs_create_write_requests(struct netfs_io_request *wreq, failed_return_credits: add_credits_and_wake_if(server, &wdata->credits, 0); failed: - netfs_write_subrequest_terminated(subreq, rc, false); + cifs_write_subrequest_terminated(wdata, rc, false); free_xid(xid); } @@ -394,6 +395,7 @@ const struct netfs_request_ops cifs_req_ops = { .create_write_requests = cifs_create_write_requests, }; +#if 0 // TODO remove 397 /* * Remove the dirty flags from a span of pages. */ @@ -518,6 +520,7 @@ void cifs_pages_write_redirty(struct inode *inode, loff_t start, unsigned int le rcu_read_unlock(); } +#endif // end netfslib remove 397 /* * Mark as invalid, all open files on tree connections since they @@ -2467,20 +2470,23 @@ int cifs_lock(struct file *file, int cmd, struct file_lock *flock) return rc; } -/* - * update the file size (if needed) after a write. Should be called with - * the inode->i_lock held - */ -void -cifs_update_eof(struct cifsInodeInfo *cifsi, loff_t offset, - unsigned int bytes_written) +void cifs_write_subrequest_terminated(struct cifs_io_subrequest *wdata, ssize_t result, + bool was_async) { - loff_t end_of_write = offset + bytes_written; + struct netfs_io_request *wreq = wdata->rreq; + loff_t new_server_eof; - if (end_of_write > cifsi->netfs.remote_i_size) - netfs_resize_file(&cifsi->netfs, end_of_write, true); + if (result > 0) { + new_server_eof = wdata->subreq.start + wdata->subreq.transferred + result; + + if (new_server_eof > netfs_inode(wreq->inode)->remote_i_size) + netfs_resize_file(netfs_inode(wreq->inode), new_server_eof, true); + } + + netfs_write_subrequest_terminated(&wdata->subreq, result, was_async); } +#if 0 // TODO remove 2483 static ssize_t cifs_write(struct cifsFileInfo *open_file, __u32 pid, const char *write_data, size_t write_size, loff_t *offset) @@ -2564,6 +2570,7 @@ cifs_write(struct cifsFileInfo *open_file, __u32 pid, const char *write_data, free_xid(xid); return total_written; } +#endif // end netfslib remove 2483 struct cifsFileInfo *find_readable_file(struct cifsInodeInfo *cifs_inode, bool fsuid_only) @@ -2769,6 +2776,7 @@ cifs_get_readable_path(struct cifs_tcon *tcon, const char *name, return -ENOENT; } +#if 0 // TODO remove 2773 void cifs_writedata_release(struct cifs_io_subrequest *wdata) { @@ -3459,7 +3467,11 @@ static int cifs_write_end(struct file *file, struct address_space *mapping, return rc; } +#endif // End netfs removal 2773 +/* + * Flush data on a strict file. 
+ */ int cifs_strict_fsync(struct file *file, loff_t start, loff_t end, int datasync) { @@ -3514,6 +3526,9 @@ int cifs_strict_fsync(struct file *file, loff_t start, loff_t end, return rc; } +/* + * Flush data on a non-strict data. + */ int cifs_fsync(struct file *file, loff_t start, loff_t end, int datasync) { unsigned int xid; @@ -3580,6 +3595,7 @@ int cifs_flush(struct file *file, fl_owner_t id) return rc; } +#if 0 // TODO remove 3594 static void collect_uncached_write_data(struct cifs_aio_ctx *ctx); static void @@ -4042,6 +4058,7 @@ ssize_t cifs_user_writev(struct kiocb *iocb, struct iov_iter *from) { return __cifs_writev(iocb, from, false); } +#endif // TODO remove 3594 static ssize_t cifs_writev(struct kiocb *iocb, struct iov_iter *from) @@ -4053,7 +4070,10 @@ cifs_writev(struct kiocb *iocb, struct iov_iter *from) struct TCP_Server_Info *server = tlink_tcon(cfile->tlink)->ses->server; ssize_t rc; - inode_lock(inode); + rc = netfs_start_io_write(inode); + if (rc < 0) + return rc; + /* * We need to hold the sem to be sure nobody modifies lock list * with a brlock that prevents writing. @@ -4067,13 +4087,12 @@ cifs_writev(struct kiocb *iocb, struct iov_iter *from) if (!cifs_find_lock_conflict(cfile, iocb->ki_pos, iov_iter_count(from), server->vals->exclusive_lock_type, 0, NULL, CIFS_WRITE_OP)) - rc = __generic_file_write_iter(iocb, from); + rc = netfs_buffered_write_iter_locked(iocb, from, NULL); else rc = -EACCES; out: up_read(&cinode->lock_sem); - inode_unlock(inode); - + netfs_end_io_write(inode); if (rc > 0) rc = generic_write_sync(iocb, rc); return rc; @@ -4096,9 +4115,9 @@ cifs_strict_writev(struct kiocb *iocb, struct iov_iter *from) if (CIFS_CACHE_WRITE(cinode)) { if (cap_unix(tcon->ses) && - (CIFS_UNIX_FCNTL_CAP & le64_to_cpu(tcon->fsUnixInfo.Capability)) - && ((cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NOPOSIXBRL) == 0)) { - written = generic_file_write_iter(iocb, from); + (CIFS_UNIX_FCNTL_CAP & le64_to_cpu(tcon->fsUnixInfo.Capability)) && + ((cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NOPOSIXBRL) == 0)) { + written = netfs_file_write_iter(iocb, from); goto out; } written = cifs_writev(iocb, from); @@ -4110,7 +4129,7 @@ cifs_strict_writev(struct kiocb *iocb, struct iov_iter *from) * affected pages because it may cause a error with mandatory locks on * these pages but not on the region from pos to ppos+len-1. 
*/ - written = cifs_user_writev(iocb, from); + written = netfs_file_write_iter(iocb, from); if (CIFS_CACHE_READ(cinode)) { /* * We have read level caching and we have just sent a write @@ -4129,6 +4148,7 @@ cifs_strict_writev(struct kiocb *iocb, struct iov_iter *from) return written; } +#if 0 // TODO remove 4143 static struct cifs_io_subrequest *cifs_readdata_alloc(work_func_t complete) { struct cifs_io_subrequest *rdata; @@ -4568,7 +4588,9 @@ ssize_t cifs_direct_readv(struct kiocb *iocb, struct iov_iter *to) ssize_t cifs_user_readv(struct kiocb *iocb, struct iov_iter *to) { return __cifs_readv(iocb, to, false); + } +#endif // end netfslib removal 4143 ssize_t cifs_loose_read_iter(struct kiocb *iocb, struct iov_iter *iter) { @@ -4576,13 +4598,13 @@ ssize_t cifs_loose_read_iter(struct kiocb *iocb, struct iov_iter *iter) struct inode *inode = file_inode(iocb->ki_filp); if (iocb->ki_flags & IOCB_DIRECT) - return cifs_user_readv(iocb, iter); + return netfs_unbuffered_read_iter(iocb, iter); rc = cifs_revalidate_mapping(inode); if (rc) return rc; - return generic_file_read_iter(iocb, iter); + return netfs_file_read_iter(iocb, iter); } ssize_t cifs_file_write_iter(struct kiocb *iocb, struct iov_iter *from) @@ -4593,7 +4615,7 @@ ssize_t cifs_file_write_iter(struct kiocb *iocb, struct iov_iter *from) int rc; if (iocb->ki_filp->f_flags & O_DIRECT) { - written = cifs_user_writev(iocb, from); + written = netfs_unbuffered_write_iter(iocb, from); if (written > 0 && CIFS_CACHE_READ(cinode)) { cifs_zap_mapping(inode); cifs_dbg(FYI, @@ -4608,17 +4630,15 @@ ssize_t cifs_file_write_iter(struct kiocb *iocb, struct iov_iter *from) if (written) return written; - written = generic_file_write_iter(iocb, from); - - if (CIFS_CACHE_WRITE(CIFS_I(inode))) - goto out; + written = netfs_file_write_iter(iocb, from); - rc = filemap_fdatawrite(inode->i_mapping); - if (rc) - cifs_dbg(FYI, "cifs_file_write_iter: %d rc on %p inode\n", - rc, inode); + if (!CIFS_CACHE_WRITE(CIFS_I(inode))) { + rc = filemap_fdatawrite(inode->i_mapping); + if (rc) + cifs_dbg(FYI, "cifs_file_write_iter: %d rc on %p inode\n", + rc, inode); + } -out: cifs_put_writer(cinode); return written; } @@ -4643,12 +4663,15 @@ cifs_strict_readv(struct kiocb *iocb, struct iov_iter *to) * pos+len-1. 
*/ if (!CIFS_CACHE_READ(cinode)) - return cifs_user_readv(iocb, to); + return netfs_unbuffered_read_iter(iocb, to); if (cap_unix(tcon->ses) && (CIFS_UNIX_FCNTL_CAP & le64_to_cpu(tcon->fsUnixInfo.Capability)) && - ((cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NOPOSIXBRL) == 0)) - return generic_file_read_iter(iocb, to); + ((cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NOPOSIXBRL) == 0)) { + if (iocb->ki_flags & IOCB_DIRECT) + return netfs_unbuffered_read_iter(iocb, to); + return netfs_buffered_read_iter(iocb, to); + } /* * We need to hold the sem to be sure nobody modifies lock list @@ -4657,12 +4680,17 @@ cifs_strict_readv(struct kiocb *iocb, struct iov_iter *to) down_read(&cinode->lock_sem); if (!cifs_find_lock_conflict(cfile, iocb->ki_pos, iov_iter_count(to), tcon->ses->server->vals->shared_lock_type, - 0, NULL, CIFS_READ_OP)) - rc = generic_file_read_iter(iocb, to); + 0, NULL, CIFS_READ_OP)) { + if (iocb->ki_flags & IOCB_DIRECT) + rc = netfs_unbuffered_read_iter(iocb, to); + else + rc = netfs_buffered_read_iter(iocb, to); + } up_read(&cinode->lock_sem); return rc; } +#if 0 // TODO remove 4633 static ssize_t cifs_read(struct file *file, char *read_data, size_t read_size, loff_t *offset) { @@ -4754,29 +4782,11 @@ cifs_read(struct file *file, char *read_data, size_t read_size, loff_t *offset) free_xid(xid); return total_read; } +#endif // end netfslib remove 4633 -/* - * If the page is mmap'ed into a process' page tables, then we need to make - * sure that it doesn't change while being written back. - */ static vm_fault_t cifs_page_mkwrite(struct vm_fault *vmf) { - struct folio *folio = page_folio(vmf->page); - - /* Wait for the folio to be written to the cache before we allow it to - * be modified. We then assume the entire folio will need writing back. - */ -#ifdef CONFIG_CIFS_FSCACHE - if (folio_test_fscache(folio) && - folio_wait_fscache_killable(folio) < 0) - return VM_FAULT_RETRY; -#endif - - folio_wait_writeback(folio); - - if (folio_lock_killable(folio) < 0) - return VM_FAULT_RETRY; - return VM_FAULT_LOCKED; + return netfs_page_mkwrite(vmf, NULL); } static const struct vm_operations_struct cifs_file_vm_ops = { @@ -4822,6 +4832,7 @@ int cifs_file_mmap(struct file *file, struct vm_area_struct *vma) return rc; } +#if 0 // TODO remove 4794 /* * Unlock a bunch of folios in the pagecache. */ @@ -5106,6 +5117,7 @@ static int cifs_read_folio(struct file *file, struct folio *folio) free_xid(xid); return rc; } +#endif // end netfslib remove 4794 static int is_inode_writable(struct cifsInodeInfo *cifs_inode) { @@ -5152,6 +5164,7 @@ bool is_size_safe_to_change(struct cifsInodeInfo *cifsInode, __u64 end_of_file) return true; } +#if 0 // TODO remove 5152 static int cifs_write_begin(struct file *file, struct address_space *mapping, loff_t pos, unsigned len, struct page **pagep, void **fsdata) @@ -5268,6 +5281,7 @@ static int cifs_launder_folio(struct folio *folio) folio_wait_fscache(folio); return rc; } +#endif // end netfslib remove 5152 void cifs_oplock_break(struct work_struct *work) { @@ -5358,6 +5372,7 @@ void cifs_oplock_break(struct work_struct *work) cifs_done_oplock_break(cinode); } +#if 0 // TODO remove 5333 /* * The presence of cifs_direct_io() in the address space ops vector * allowes open() O_DIRECT flags which would have failed otherwise. 
@@ -5376,6 +5391,7 @@ cifs_direct_io(struct kiocb *iocb, struct iov_iter *iter) */ return -EINVAL; } +#endif // netfs end remove 5333 static int cifs_swap_activate(struct swap_info_struct *sis, struct file *swap_file, sector_t *span) @@ -5438,22 +5454,20 @@ static void cifs_swap_deactivate(struct file *file) } const struct address_space_operations cifs_addr_ops = { - .read_folio = cifs_read_folio, - .readahead = cifs_readahead, - .writepages = cifs_writepages, - .write_begin = cifs_write_begin, - .write_end = cifs_write_end, - .dirty_folio = netfs_dirty_folio, - .release_folio = cifs_release_folio, - .direct_IO = cifs_direct_io, - .invalidate_folio = cifs_invalidate_folio, - .launder_folio = cifs_launder_folio, - .migrate_folio = filemap_migrate_folio, + .read_folio = netfs_read_folio, + .readahead = netfs_readahead, + .writepages = netfs_writepages, + .dirty_folio = netfs_dirty_folio, + .release_folio = netfs_release_folio, + .direct_IO = noop_direct_IO, + .invalidate_folio = netfs_invalidate_folio, + .launder_folio = netfs_launder_folio, + .migrate_folio = filemap_migrate_folio, /* * TODO: investigate and if useful we could add an is_dirty_writeback * helper if needed */ - .swap_activate = cifs_swap_activate, + .swap_activate = cifs_swap_activate, .swap_deactivate = cifs_swap_deactivate, }; @@ -5463,13 +5477,11 @@ const struct address_space_operations cifs_addr_ops = { * to leave cifs_readahead out of the address space operations. */ const struct address_space_operations cifs_addr_ops_smallbuf = { - .read_folio = cifs_read_folio, - .writepages = cifs_writepages, - .write_begin = cifs_write_begin, - .write_end = cifs_write_end, - .dirty_folio = netfs_dirty_folio, - .release_folio = cifs_release_folio, - .invalidate_folio = cifs_invalidate_folio, - .launder_folio = cifs_launder_folio, - .migrate_folio = filemap_migrate_folio, + .read_folio = netfs_read_folio, + .writepages = netfs_writepages, + .dirty_folio = netfs_dirty_folio, + .release_folio = netfs_release_folio, + .invalidate_folio = netfs_invalidate_folio, + .launder_folio = netfs_launder_folio, + .migrate_folio = filemap_migrate_folio, }; diff --git a/fs/smb/client/fscache.c b/fs/smb/client/fscache.c index c4a3cb736881..228fe57bbde3 100644 --- a/fs/smb/client/fscache.c +++ b/fs/smb/client/fscache.c @@ -137,6 +137,7 @@ void cifs_fscache_release_inode_cookie(struct inode *inode) } } +#if 0 // TODO remove /* * Fallback page reading interface. 
*/ @@ -245,3 +246,4 @@ int __cifs_fscache_query_occupancy(struct inode *inode, fscache_end_operation(&cres); return ret; } +#endif diff --git a/fs/smb/client/fscache.h b/fs/smb/client/fscache.h index a3d73720914f..c2c05a778a71 100644 --- a/fs/smb/client/fscache.h +++ b/fs/smb/client/fscache.h @@ -74,6 +74,7 @@ static inline void cifs_invalidate_cache(struct inode *inode, unsigned int flags i_size_read(inode), flags); } +#if 0 // TODO remove extern int __cifs_fscache_query_occupancy(struct inode *inode, pgoff_t first, unsigned int nr_pages, pgoff_t *_data_first, @@ -108,6 +109,7 @@ static inline void cifs_readahead_to_fscache(struct inode *inode, if (cifs_inode_cookie(inode)) __cifs_readahead_to_fscache(inode, pos, len); } +#endif #else /* CONFIG_CIFS_FSCACHE */ static inline @@ -125,6 +127,7 @@ static inline void cifs_fscache_unuse_inode_cookie(struct inode *inode, bool upd static inline struct fscache_cookie *cifs_inode_cookie(struct inode *inode) { return NULL; } static inline void cifs_invalidate_cache(struct inode *inode, unsigned int flags) {} +#if 0 // TODO remove static inline int cifs_fscache_query_occupancy(struct inode *inode, pgoff_t first, unsigned int nr_pages, pgoff_t *_data_first, @@ -143,6 +146,7 @@ cifs_readpage_from_fscache(struct inode *inode, struct page *page) static inline void cifs_readahead_to_fscache(struct inode *inode, loff_t pos, size_t len) {} +#endif #endif /* CONFIG_CIFS_FSCACHE */ diff --git a/fs/smb/client/inode.c b/fs/smb/client/inode.c index 24489e1e238a..7acaf75feabc 100644 --- a/fs/smb/client/inode.c +++ b/fs/smb/client/inode.c @@ -28,14 +28,29 @@ #include "cached_dir.h" #include "reparse.h" +/* + * Set parameters for the netfs library + */ +static void cifs_set_netfs_context(struct inode *inode) +{ + struct cifsInodeInfo *cifs_i = CIFS_I(inode); + struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb); + + netfs_inode_init(&cifs_i->netfs, &cifs_req_ops, true); + if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_STRICT_IO) + __set_bit(NETFS_ICTX_WRITETHROUGH, &cifs_i->netfs.flags); +} + static void cifs_set_ops(struct inode *inode) { struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb); + struct netfs_inode *ictx = netfs_inode(inode); switch (inode->i_mode & S_IFMT) { case S_IFREG: inode->i_op = &cifs_file_inode_ops; if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_DIRECT_IO) { + set_bit(NETFS_ICTX_UNBUFFERED, &ictx->flags); if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NO_BRL) inode->i_fop = &cifs_file_direct_nobrl_ops; else @@ -220,8 +235,10 @@ cifs_fattr_to_inode(struct inode *inode, struct cifs_fattr *fattr) if (fattr->cf_flags & CIFS_FATTR_JUNCTION) inode->i_flags |= S_AUTOMOUNT; - if (inode->i_state & I_NEW) + if (inode->i_state & I_NEW) { + cifs_set_netfs_context(inode); cifs_set_ops(inode); + } return 0; } diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c index 84e3675eb41e..b58fdee40755 100644 --- a/fs/smb/client/smb2pdu.c +++ b/fs/smb/client/smb2pdu.c @@ -4386,10 +4386,12 @@ smb2_new_read_req(void **buf, unsigned int *total_len, req->Length = cpu_to_le32(io_parms->length); req->Offset = cpu_to_le64(io_parms->offset); - trace_smb3_read_enter(0 /* xid */, - io_parms->persistent_fid, - io_parms->tcon->tid, io_parms->tcon->ses->Suid, - io_parms->offset, io_parms->length); + trace_smb3_read_enter(rdata ? rdata->rreq->debug_id : 0, + rdata ? rdata->subreq.debug_index : 0, + rdata ? 
rdata->xid : 0, + io_parms->persistent_fid, + io_parms->tcon->tid, io_parms->tcon->ses->Suid, + io_parms->offset, io_parms->length); #ifdef CONFIG_CIFS_SMB_DIRECT /* * If we want to do a RDMA write, fill in and append @@ -4451,7 +4453,7 @@ static void smb2_readv_callback(struct mid_q_entry *mid) { struct cifs_io_subrequest *rdata = mid->callback_data; - struct cifs_tcon *tcon = tlink_tcon(rdata->cfile->tlink); + struct cifs_tcon *tcon = tlink_tcon(rdata->req->cfile->tlink); struct TCP_Server_Info *server = rdata->server; struct smb2_hdr *shdr = (struct smb2_hdr *)rdata->iov[0].iov_base; @@ -4520,17 +4522,33 @@ smb2_readv_callback(struct mid_q_entry *mid) #endif if (rdata->result && rdata->result != -ENODATA) { cifs_stats_fail_inc(tcon, SMB2_READ_HE); - trace_smb3_read_err(0 /* xid */, - rdata->cfile->fid.persistent_fid, + trace_smb3_read_err(rdata->rreq->debug_id, + rdata->subreq.debug_index, + rdata->xid, + rdata->req->cfile->fid.persistent_fid, tcon->tid, tcon->ses->Suid, rdata->subreq.start, rdata->subreq.len, rdata->result); } else - trace_smb3_read_done(0 /* xid */, - rdata->cfile->fid.persistent_fid, + trace_smb3_read_done(rdata->rreq->debug_id, + rdata->subreq.debug_index, + rdata->xid, + rdata->req->cfile->fid.persistent_fid, tcon->tid, tcon->ses->Suid, rdata->subreq.start, rdata->got_bytes); - queue_work(cifsiod_wq, &rdata->work); + if (rdata->result == -ENODATA) { + /* We may have got an EOF error because fallocate + * failed to enlarge the file. + */ + if (rdata->subreq.start < rdata->subreq.rreq->i_size) + rdata->result = 0; + } + if (rdata->result == 0 || rdata->result == -EAGAIN) + iov_iter_advance(&rdata->subreq.io_iter, rdata->got_bytes); + rdata->have_credits = false; + netfs_subreq_terminated(&rdata->subreq, + (rdata->result == 0 || rdata->result == -EAGAIN) ? 
+ rdata->got_bytes : rdata->result, true); release_mid(mid); add_credits(server, &credits, 0); } @@ -4546,7 +4564,7 @@ smb2_async_readv(struct cifs_io_subrequest *rdata) struct smb_rqst rqst = { .rq_iov = rdata->iov, .rq_nvec = 1 }; struct TCP_Server_Info *server; - struct cifs_tcon *tcon = tlink_tcon(rdata->cfile->tlink); + struct cifs_tcon *tcon = tlink_tcon(rdata->req->cfile->tlink); unsigned int total_len; int credit_request; @@ -4556,12 +4574,12 @@ smb2_async_readv(struct cifs_io_subrequest *rdata) if (!rdata->server) rdata->server = cifs_pick_channel(tcon->ses); - io_parms.tcon = tlink_tcon(rdata->cfile->tlink); + io_parms.tcon = tlink_tcon(rdata->req->cfile->tlink); io_parms.server = server = rdata->server; io_parms.offset = rdata->subreq.start; io_parms.length = rdata->subreq.len; - io_parms.persistent_fid = rdata->cfile->fid.persistent_fid; - io_parms.volatile_fid = rdata->cfile->fid.volatile_fid; + io_parms.persistent_fid = rdata->req->cfile->fid.persistent_fid; + io_parms.volatile_fid = rdata->req->cfile->fid.volatile_fid; io_parms.pid = rdata->pid; rc = smb2_new_read_req( @@ -4595,15 +4613,15 @@ smb2_async_readv(struct cifs_io_subrequest *rdata) flags |= CIFS_HAS_CREDITS; } - cifs_get_readdata(rdata); rc = cifs_call_async(server, &rqst, cifs_readv_receive, smb2_readv_callback, smb3_handle_read_data, rdata, flags, &rdata->credits); if (rc) { - cifs_put_readdata(rdata); cifs_stats_fail_inc(io_parms.tcon, SMB2_READ_HE); - trace_smb3_read_err(0 /* xid */, io_parms.persistent_fid, + trace_smb3_read_err(rdata->rreq->debug_id, + rdata->subreq.debug_index, + rdata->xid, io_parms.persistent_fid, io_parms.tcon->tid, io_parms.tcon->ses->Suid, io_parms.offset, io_parms.length, rc); @@ -4654,22 +4672,23 @@ SMB2_read(const unsigned int xid, struct cifs_io_parms *io_parms, if (rc != -ENODATA) { cifs_stats_fail_inc(io_parms->tcon, SMB2_READ_HE); cifs_dbg(VFS, "Send error in read = %d\n", rc); - trace_smb3_read_err(xid, + trace_smb3_read_err(0, 0, xid, req->PersistentFileId, io_parms->tcon->tid, ses->Suid, io_parms->offset, io_parms->length, rc); } else - trace_smb3_read_done(xid, req->PersistentFileId, io_parms->tcon->tid, + trace_smb3_read_done(0, 0, xid, + req->PersistentFileId, io_parms->tcon->tid, ses->Suid, io_parms->offset, 0); free_rsp_buf(resp_buftype, rsp_iov.iov_base); cifs_small_buf_release(req); return rc == -ENODATA ? 
0 : rc; } else - trace_smb3_read_done(xid, - req->PersistentFileId, - io_parms->tcon->tid, ses->Suid, - io_parms->offset, io_parms->length); + trace_smb3_read_done(0, 0, xid, + req->PersistentFileId, + io_parms->tcon->tid, ses->Suid, + io_parms->offset, io_parms->length); cifs_small_buf_release(req); @@ -4703,11 +4722,12 @@ static void smb2_writev_callback(struct mid_q_entry *mid) { struct cifs_io_subrequest *wdata = mid->callback_data; - struct cifs_tcon *tcon = tlink_tcon(wdata->cfile->tlink); + struct cifs_tcon *tcon = tlink_tcon(wdata->req->cfile->tlink); struct TCP_Server_Info *server = wdata->server; - unsigned int written; struct smb2_write_rsp *rsp = (struct smb2_write_rsp *)mid->resp_buf; struct cifs_credits credits = { .value = 0, .instance = 0 }; + ssize_t result = 0; + size_t written; WARN_ONCE(wdata->server != mid->server, "wdata server %p != mid server %p", @@ -4717,8 +4737,8 @@ smb2_writev_callback(struct mid_q_entry *mid) case MID_RESPONSE_RECEIVED: credits.value = le16_to_cpu(rsp->hdr.CreditRequest); credits.instance = server->reconnect_instance; - wdata->result = smb2_check_receive(mid, server, 0); - if (wdata->result != 0) + result = smb2_check_receive(mid, server, 0); + if (result != 0) break; written = le32_to_cpu(rsp->DataLength); @@ -4735,17 +4755,18 @@ smb2_writev_callback(struct mid_q_entry *mid) wdata->result = -ENOSPC; else wdata->subreq.len = written; + iov_iter_advance(&wdata->subreq.io_iter, written); break; case MID_REQUEST_SUBMITTED: case MID_RETRY_NEEDED: - wdata->result = -EAGAIN; + result = -EAGAIN; break; case MID_RESPONSE_MALFORMED: credits.value = le16_to_cpu(rsp->hdr.CreditRequest); credits.instance = server->reconnect_instance; fallthrough; default: - wdata->result = -EIO; + result = -EIO; break; } #ifdef CONFIG_CIFS_SMB_DIRECT @@ -4761,10 +4782,10 @@ smb2_writev_callback(struct mid_q_entry *mid) wdata->mr = NULL; } #endif - if (wdata->result) { + if (result) { cifs_stats_fail_inc(tcon, SMB2_WRITE_HE); trace_smb3_write_err(0 /* no xid */, - wdata->cfile->fid.persistent_fid, + wdata->req->cfile->fid.persistent_fid, tcon->tid, tcon->ses->Suid, wdata->subreq.start, wdata->subreq.len, wdata->result); if (wdata->result == -ENOSPC) @@ -4772,11 +4793,11 @@ smb2_writev_callback(struct mid_q_entry *mid) tcon->tree_name); } else trace_smb3_write_done(0 /* no xid */, - wdata->cfile->fid.persistent_fid, + wdata->req->cfile->fid.persistent_fid, tcon->tid, tcon->ses->Suid, wdata->subreq.start, wdata->subreq.len); - queue_work(cifsiod_wq, &wdata->work); + cifs_write_subrequest_terminated(wdata, result ?: written, true); release_mid(mid); add_credits(server, &credits, 0); } @@ -4788,7 +4809,7 @@ smb2_async_writev(struct cifs_io_subrequest *wdata) int rc = -EACCES, flags = 0; struct smb2_write_req *req = NULL; struct smb2_hdr *shdr; - struct cifs_tcon *tcon = tlink_tcon(wdata->cfile->tlink); + struct cifs_tcon *tcon = tlink_tcon(wdata->req->cfile->tlink); struct TCP_Server_Info *server = wdata->server; struct kvec iov[1]; struct smb_rqst rqst = { }; @@ -4809,8 +4830,8 @@ smb2_async_writev(struct cifs_io_subrequest *wdata) .server = server, .offset = wdata->subreq.start, .length = wdata->subreq.len, - .persistent_fid = wdata->cfile->fid.persistent_fid, - .volatile_fid = wdata->cfile->fid.volatile_fid, + .persistent_fid = wdata->req->cfile->fid.persistent_fid, + .volatile_fid = wdata->req->cfile->fid.volatile_fid, .pid = wdata->pid, }; io_parms = &_io_parms; @@ -4818,7 +4839,7 @@ smb2_async_writev(struct cifs_io_subrequest *wdata) rc = smb2_plain_req_init(SMB2_WRITE, 
tcon, server, (void **) &req, &total_len); if (rc) - return rc; + goto out; if (smb3_encryption_required(tcon)) flags |= CIFS_TRANSFORM_REQ; @@ -4917,7 +4938,6 @@ smb2_async_writev(struct cifs_io_subrequest *wdata) flags |= CIFS_HAS_CREDITS; } - cifs_get_writedata(wdata); rc = cifs_call_async(server, &rqst, NULL, smb2_writev_callback, NULL, wdata, flags, &wdata->credits); @@ -4929,12 +4949,14 @@ smb2_async_writev(struct cifs_io_subrequest *wdata) io_parms->offset, io_parms->length, rc); - cifs_put_writedata(wdata); cifs_stats_fail_inc(tcon, SMB2_WRITE_HE); } async_writev_out: cifs_small_buf_release(req); +out: + if (rc) + cifs_write_subrequest_terminated(wdata, rc, true); return rc; } diff --git a/fs/smb/client/trace.h b/fs/smb/client/trace.h index ce90ae0d77f8..9b4fbbaba4b9 100644 --- a/fs/smb/client/trace.h +++ b/fs/smb/client/trace.h @@ -21,6 +21,62 @@ /* For logging errors in read or write */ DECLARE_EVENT_CLASS(smb3_rw_err_class, + TP_PROTO(unsigned int rreq_debug_id, + unsigned int rreq_debug_index, + unsigned int xid, + __u64 fid, + __u32 tid, + __u64 sesid, + __u64 offset, + __u32 len, + int rc), + TP_ARGS(rreq_debug_id, rreq_debug_index, + xid, fid, tid, sesid, offset, len, rc), + TP_STRUCT__entry( + __field(unsigned int, rreq_debug_id) + __field(unsigned int, rreq_debug_index) + __field(unsigned int, xid) + __field(__u64, fid) + __field(__u32, tid) + __field(__u64, sesid) + __field(__u64, offset) + __field(__u32, len) + __field(int, rc) + ), + TP_fast_assign( + __entry->rreq_debug_id = rreq_debug_id; + __entry->rreq_debug_index = rreq_debug_index; + __entry->xid = xid; + __entry->fid = fid; + __entry->tid = tid; + __entry->sesid = sesid; + __entry->offset = offset; + __entry->len = len; + __entry->rc = rc; + ), + TP_printk("\tR=%08x[%x] xid=%u sid=0x%llx tid=0x%x fid=0x%llx offset=0x%llx len=0x%x rc=%d", + __entry->rreq_debug_id, __entry->rreq_debug_index, + __entry->xid, __entry->sesid, __entry->tid, __entry->fid, + __entry->offset, __entry->len, __entry->rc) +) + +#define DEFINE_SMB3_RW_ERR_EVENT(name) \ +DEFINE_EVENT(smb3_rw_err_class, smb3_##name, \ + TP_PROTO(unsigned int rreq_debug_id, \ + unsigned int rreq_debug_index, \ + unsigned int xid, \ + __u64 fid, \ + __u32 tid, \ + __u64 sesid, \ + __u64 offset, \ + __u32 len, \ + int rc), \ + TP_ARGS(rreq_debug_id, rreq_debug_index, xid, fid, tid, sesid, offset, len, rc)) + +DEFINE_SMB3_RW_ERR_EVENT(read_err); + +/* For logging errors in other file I/O ops */ +DECLARE_EVENT_CLASS(smb3_other_err_class, TP_PROTO(unsigned int xid, __u64 fid, __u32 tid, @@ -52,8 +108,8 @@ DECLARE_EVENT_CLASS(smb3_rw_err_class, __entry->offset, __entry->len, __entry->rc) ) -#define DEFINE_SMB3_RW_ERR_EVENT(name) \ -DEFINE_EVENT(smb3_rw_err_class, smb3_##name, \ +#define DEFINE_SMB3_OTHER_ERR_EVENT(name) \ +DEFINE_EVENT(smb3_other_err_class, smb3_##name, \ TP_PROTO(unsigned int xid, \ __u64 fid, \ __u32 tid, \ @@ -63,15 +119,67 @@ DEFINE_EVENT(smb3_rw_err_class, smb3_##name, \ int rc), \ TP_ARGS(xid, fid, tid, sesid, offset, len, rc)) -DEFINE_SMB3_RW_ERR_EVENT(write_err); -DEFINE_SMB3_RW_ERR_EVENT(read_err); -DEFINE_SMB3_RW_ERR_EVENT(query_dir_err); -DEFINE_SMB3_RW_ERR_EVENT(zero_err); -DEFINE_SMB3_RW_ERR_EVENT(falloc_err); +DEFINE_SMB3_OTHER_ERR_EVENT(write_err); +DEFINE_SMB3_OTHER_ERR_EVENT(query_dir_err); +DEFINE_SMB3_OTHER_ERR_EVENT(zero_err); +DEFINE_SMB3_OTHER_ERR_EVENT(falloc_err); /* For logging successful read or write */ DECLARE_EVENT_CLASS(smb3_rw_done_class, + TP_PROTO(unsigned int rreq_debug_id, + unsigned int rreq_debug_index, + unsigned 
int xid, + __u64 fid, + __u32 tid, + __u64 sesid, + __u64 offset, + __u32 len), + TP_ARGS(rreq_debug_id, rreq_debug_index, + xid, fid, tid, sesid, offset, len), + TP_STRUCT__entry( + __field(unsigned int, rreq_debug_id) + __field(unsigned int, rreq_debug_index) + __field(unsigned int, xid) + __field(__u64, fid) + __field(__u32, tid) + __field(__u64, sesid) + __field(__u64, offset) + __field(__u32, len) + ), + TP_fast_assign( + __entry->rreq_debug_id = rreq_debug_id; + __entry->rreq_debug_index = rreq_debug_index; + __entry->xid = xid; + __entry->fid = fid; + __entry->tid = tid; + __entry->sesid = sesid; + __entry->offset = offset; + __entry->len = len; + ), + TP_printk("R=%08x[%x] xid=%u sid=0x%llx tid=0x%x fid=0x%llx offset=0x%llx len=0x%x", + __entry->rreq_debug_id, __entry->rreq_debug_index, + __entry->xid, __entry->sesid, __entry->tid, __entry->fid, + __entry->offset, __entry->len) +) + +#define DEFINE_SMB3_RW_DONE_EVENT(name) \ +DEFINE_EVENT(smb3_rw_done_class, smb3_##name, \ + TP_PROTO(unsigned int rreq_debug_id, \ + unsigned int rreq_debug_index, \ + unsigned int xid, \ + __u64 fid, \ + __u32 tid, \ + __u64 sesid, \ + __u64 offset, \ + __u32 len), \ + TP_ARGS(rreq_debug_id, rreq_debug_index, xid, fid, tid, sesid, offset, len)) + +DEFINE_SMB3_RW_DONE_EVENT(read_enter); +DEFINE_SMB3_RW_DONE_EVENT(read_done); + +/* For logging successful other op */ +DECLARE_EVENT_CLASS(smb3_other_done_class, TP_PROTO(unsigned int xid, __u64 fid, __u32 tid, @@ -100,8 +208,8 @@ DECLARE_EVENT_CLASS(smb3_rw_done_class, __entry->offset, __entry->len) ) -#define DEFINE_SMB3_RW_DONE_EVENT(name) \ -DEFINE_EVENT(smb3_rw_done_class, smb3_##name, \ +#define DEFINE_SMB3_OTHER_DONE_EVENT(name) \ +DEFINE_EVENT(smb3_other_done_class, smb3_##name, \ TP_PROTO(unsigned int xid, \ __u64 fid, \ __u32 tid, \ @@ -110,16 +218,14 @@ DEFINE_EVENT(smb3_rw_done_class, smb3_##name, \ __u32 len), \ TP_ARGS(xid, fid, tid, sesid, offset, len)) -DEFINE_SMB3_RW_DONE_EVENT(write_enter); -DEFINE_SMB3_RW_DONE_EVENT(read_enter); -DEFINE_SMB3_RW_DONE_EVENT(query_dir_enter); -DEFINE_SMB3_RW_DONE_EVENT(zero_enter); -DEFINE_SMB3_RW_DONE_EVENT(falloc_enter); -DEFINE_SMB3_RW_DONE_EVENT(write_done); -DEFINE_SMB3_RW_DONE_EVENT(read_done); -DEFINE_SMB3_RW_DONE_EVENT(query_dir_done); -DEFINE_SMB3_RW_DONE_EVENT(zero_done); -DEFINE_SMB3_RW_DONE_EVENT(falloc_done); +DEFINE_SMB3_OTHER_DONE_EVENT(write_enter); +DEFINE_SMB3_OTHER_DONE_EVENT(query_dir_enter); +DEFINE_SMB3_OTHER_DONE_EVENT(zero_enter); +DEFINE_SMB3_OTHER_DONE_EVENT(falloc_enter); +DEFINE_SMB3_OTHER_DONE_EVENT(write_done); +DEFINE_SMB3_OTHER_DONE_EVENT(query_dir_done); +DEFINE_SMB3_OTHER_DONE_EVENT(zero_done); +DEFINE_SMB3_OTHER_DONE_EVENT(falloc_done); /* For logging successful set EOF (truncate) */ DECLARE_EVENT_CLASS(smb3_eof_class, diff --git a/fs/smb/client/transport.c b/fs/smb/client/transport.c index 5a69a7430ffa..4bf8b2ff26f5 100644 --- a/fs/smb/client/transport.c +++ b/fs/smb/client/transport.c @@ -1808,8 +1808,11 @@ cifs_readv_receive(struct TCP_Server_Info *server, struct mid_q_entry *mid) length = data_len; /* An RDMA read is already done. 
*/ else #endif + { length = cifs_read_iter_from_socket(server, &rdata->subreq.io_iter, data_len); + iov_iter_revert(&rdata->subreq.io_iter, data_len); + } if (length > 0) rdata->got_bytes += length; server->total_read += length;
From patchwork Mon Feb 5 22:57:22 2024
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 197095
From: David Howells
To: Steve French
Cc: David Howells , Jeff Layton , Matthew Wilcox , Paulo Alcantara , Shyam Prasad N , Tom Talpey , Christian Brauner , netfs@lists.linux.dev, linux-cifs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, Steve French , Shyam Prasad N , Rohith Surabattula
Subject: [PATCH v5 10/12] cifs: Remove some code that's no longer used, part 1
Date: Mon, 5 Feb 2024 22:57:22 +0000
Message-ID: <20240205225726.3104808-11-dhowells@redhat.com>
In-Reply-To: <20240205225726.3104808-1-dhowells@redhat.com>
References: <20240205225726.3104808-1-dhowells@redhat.com>

Remove some code that was #if'd out with the netfslib conversion. This is split into parts for file.c as the diff generator otherwise produces a hard to read diff for part of it where a big chunk is cut out.
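The removal is safe because, after the earlier patches in this series, every buffered and direct I/O path in the client goes through netfslib rather than the bespoke helpers being deleted here. As a rough, illustrative sketch only (not part of this patch): wiring a filesystem into netfslib amounts to initialising the netfs context on the inode and pointing the VFS entry points at the netfs_* helpers. The myfs_* names below are hypothetical stand-ins; the netfs_* calls are the ones used in the conversion diffs above.

#include <linux/fs.h>
#include <linux/netfs.h>
#include <linux/pagemap.h>

/* Hypothetical per-filesystem hooks that netfslib calls back into. */
extern const struct netfs_request_ops myfs_req_ops;

/* Attach the inode to netfslib when the inode is first set up. */
static void myfs_set_netfs_context(struct inode *inode)
{
	netfs_inode_init(netfs_inode(inode), &myfs_req_ops, true);
}

/* Pagecache I/O is delegated wholesale to netfslib. */
static const struct address_space_operations myfs_aops = {
	.read_folio		= netfs_read_folio,
	.readahead		= netfs_readahead,
	.writepages		= netfs_writepages,
	.dirty_folio		= netfs_dirty_folio,
	.release_folio		= netfs_release_folio,
	.invalidate_folio	= netfs_invalidate_folio,
	.migrate_folio		= filemap_migrate_folio,
	.direct_IO		= noop_direct_IO, /* O_DIRECT goes via ->read_iter/->write_iter */
};

/* Both O_DIRECT and buffered reads funnel through netfslib. */
static ssize_t myfs_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
{
	if (iocb->ki_flags & IOCB_DIRECT)
		return netfs_unbuffered_read_iter(iocb, to);
	return netfs_file_read_iter(iocb, to);
}

With the entry points arranged like this, the old cifs-specific readahead, write_begin/write_end and fscache fallback helpers are never reached, which is what allows this patch and the next one to delete them outright.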
Signed-off-by: David Howells cc: Steve French cc: Shyam Prasad N cc: Rohith Surabattula cc: Jeff Layton cc: linux-cifs@vger.kernel.org cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org cc: linux-mm@kvack.org --- fs/smb/client/cifsglob.h | 12 - fs/smb/client/cifsproto.h | 25 -- fs/smb/client/file.c | 640 -------------------------------------- fs/smb/client/fscache.c | 111 ------- fs/smb/client/fscache.h | 58 ---- 5 files changed, 846 deletions(-) diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h index 259dc3893e28..70f205997006 100644 --- a/fs/smb/client/cifsglob.h +++ b/fs/smb/client/cifsglob.h @@ -1485,18 +1485,6 @@ struct cifs_io_subrequest { struct smbd_mr *mr; #endif struct cifs_credits credits; - -#if 0 // TODO: Remove following elements - struct list_head list; - struct completion done; - struct work_struct work; - struct cifsFileInfo *cfile; - struct address_space *mapping; - struct cifs_aio_ctx *ctx; - enum writeback_sync_modes sync_mode; - bool uncached; - struct bio_vec *bv; -#endif }; /* diff --git a/fs/smb/client/cifsproto.h b/fs/smb/client/cifsproto.h index d869918fb5c8..1468df1ff47d 100644 --- a/fs/smb/client/cifsproto.h +++ b/fs/smb/client/cifsproto.h @@ -590,36 +590,11 @@ void __cifs_put_smb_ses(struct cifs_ses *ses); extern struct cifs_ses * cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx); -#if 0 // TODO Remove -void cifs_readdata_release(struct cifs_io_subrequest *rdata); -static inline void cifs_get_readdata(struct cifs_io_subrequest *rdata) -{ - refcount_inc(&rdata->subreq.ref); -} -static inline void cifs_put_readdata(struct cifs_io_subrequest *rdata) -{ - if (refcount_dec_and_test(&rdata->subreq.ref)) - cifs_readdata_release(rdata); -} -#endif int cifs_async_readv(struct cifs_io_subrequest *rdata); int cifs_readv_receive(struct TCP_Server_Info *server, struct mid_q_entry *mid); int cifs_async_writev(struct cifs_io_subrequest *wdata); void cifs_writev_complete(struct work_struct *work); -#if 0 // TODO Remove -struct cifs_io_subrequest *cifs_writedata_alloc(work_func_t complete); -void cifs_writedata_release(struct cifs_io_subrequest *rdata); -static inline void cifs_get_writedata(struct cifs_io_subrequest *wdata) -{ - refcount_inc(&wdata->subreq.ref); -} -static inline void cifs_put_writedata(struct cifs_io_subrequest *wdata) -{ - if (refcount_dec_and_test(&wdata->subreq.ref)) - cifs_writedata_release(wdata); -} -#endif int cifs_query_mf_symlink(unsigned int xid, struct cifs_tcon *tcon, struct cifs_sb_info *cifs_sb, const unsigned char *path, char *pbuf, diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c index ff55749f5709..e31fd607a441 100644 --- a/fs/smb/client/file.c +++ b/fs/smb/client/file.c @@ -395,133 +395,6 @@ const struct netfs_request_ops cifs_req_ops = { .create_write_requests = cifs_create_write_requests, }; -#if 0 // TODO remove 397 -/* - * Remove the dirty flags from a span of pages. - */ -static void cifs_undirty_folios(struct inode *inode, loff_t start, unsigned int len) -{ - struct address_space *mapping = inode->i_mapping; - struct folio *folio; - pgoff_t end; - - XA_STATE(xas, &mapping->i_pages, start / PAGE_SIZE); - - rcu_read_lock(); - - end = (start + len - 1) / PAGE_SIZE; - xas_for_each_marked(&xas, folio, end, PAGECACHE_TAG_DIRTY) { - if (xas_retry(&xas, folio)) - continue; - xas_pause(&xas); - rcu_read_unlock(); - folio_lock(folio); - folio_clear_dirty_for_io(folio); - folio_unlock(folio); - rcu_read_lock(); - } - - rcu_read_unlock(); -} - -/* - * Completion of write to server. 
- */ -void cifs_pages_written_back(struct inode *inode, loff_t start, unsigned int len) -{ - struct address_space *mapping = inode->i_mapping; - struct folio *folio; - pgoff_t end; - - XA_STATE(xas, &mapping->i_pages, start / PAGE_SIZE); - - if (!len) - return; - - rcu_read_lock(); - - end = (start + len - 1) / PAGE_SIZE; - xas_for_each(&xas, folio, end) { - if (xas_retry(&xas, folio)) - continue; - if (!folio_test_writeback(folio)) { - WARN_ONCE(1, "bad %x @%llx page %lx %lx\n", - len, start, folio->index, end); - continue; - } - - folio_detach_private(folio); - folio_end_writeback(folio); - } - - rcu_read_unlock(); -} - -/* - * Failure of write to server. - */ -void cifs_pages_write_failed(struct inode *inode, loff_t start, unsigned int len) -{ - struct address_space *mapping = inode->i_mapping; - struct folio *folio; - pgoff_t end; - - XA_STATE(xas, &mapping->i_pages, start / PAGE_SIZE); - - if (!len) - return; - - rcu_read_lock(); - - end = (start + len - 1) / PAGE_SIZE; - xas_for_each(&xas, folio, end) { - if (xas_retry(&xas, folio)) - continue; - if (!folio_test_writeback(folio)) { - WARN_ONCE(1, "bad %x @%llx page %lx %lx\n", - len, start, folio->index, end); - continue; - } - - folio_set_error(folio); - folio_end_writeback(folio); - } - - rcu_read_unlock(); -} - -/* - * Redirty pages after a temporary failure. - */ -void cifs_pages_write_redirty(struct inode *inode, loff_t start, unsigned int len) -{ - struct address_space *mapping = inode->i_mapping; - struct folio *folio; - pgoff_t end; - - XA_STATE(xas, &mapping->i_pages, start / PAGE_SIZE); - - if (!len) - return; - - rcu_read_lock(); - - end = (start + len - 1) / PAGE_SIZE; - xas_for_each(&xas, folio, end) { - if (!folio_test_writeback(folio)) { - WARN_ONCE(1, "bad %x @%llx page %lx %lx\n", - len, start, folio->index, end); - continue; - } - - filemap_dirty_folio(folio->mapping, folio); - folio_end_writeback(folio); - } - - rcu_read_unlock(); -} -#endif // end netfslib remove 397 - /* * Mark as invalid, all open files on tree connections since they * were closed when session to server was lost. 
@@ -2486,92 +2359,6 @@ void cifs_write_subrequest_terminated(struct cifs_io_subrequest *wdata, ssize_t netfs_write_subrequest_terminated(&wdata->subreq, result, was_async); } -#if 0 // TODO remove 2483 -static ssize_t -cifs_write(struct cifsFileInfo *open_file, __u32 pid, const char *write_data, - size_t write_size, loff_t *offset) -{ - int rc = 0; - unsigned int bytes_written = 0; - unsigned int total_written; - struct cifs_tcon *tcon; - struct TCP_Server_Info *server; - unsigned int xid; - struct dentry *dentry = open_file->dentry; - struct cifsInodeInfo *cifsi = CIFS_I(d_inode(dentry)); - struct cifs_io_parms io_parms = {0}; - - cifs_dbg(FYI, "write %zd bytes to offset %lld of %pd\n", - write_size, *offset, dentry); - - tcon = tlink_tcon(open_file->tlink); - server = tcon->ses->server; - - if (!server->ops->sync_write) - return -ENOSYS; - - xid = get_xid(); - - for (total_written = 0; write_size > total_written; - total_written += bytes_written) { - rc = -EAGAIN; - while (rc == -EAGAIN) { - struct kvec iov[2]; - unsigned int len; - - if (open_file->invalidHandle) { - /* we could deadlock if we called - filemap_fdatawait from here so tell - reopen_file not to flush data to - server now */ - rc = cifs_reopen_file(open_file, false); - if (rc != 0) - break; - } - - len = min(server->ops->wp_retry_size(d_inode(dentry)), - (unsigned int)write_size - total_written); - /* iov[0] is reserved for smb header */ - iov[1].iov_base = (char *)write_data + total_written; - iov[1].iov_len = len; - io_parms.pid = pid; - io_parms.tcon = tcon; - io_parms.offset = *offset; - io_parms.length = len; - rc = server->ops->sync_write(xid, &open_file->fid, - &io_parms, &bytes_written, iov, 1); - } - if (rc || (bytes_written == 0)) { - if (total_written) - break; - else { - free_xid(xid); - return rc; - } - } else { - spin_lock(&d_inode(dentry)->i_lock); - cifs_update_eof(cifsi, *offset, bytes_written); - spin_unlock(&d_inode(dentry)->i_lock); - *offset += bytes_written; - } - } - - cifs_stats_bytes_written(tcon, total_written); - - if (total_written > 0) { - spin_lock(&d_inode(dentry)->i_lock); - if (*offset > d_inode(dentry)->i_size) { - i_size_write(d_inode(dentry), *offset); - d_inode(dentry)->i_blocks = (512 - 1 + *offset) >> 9; - } - spin_unlock(&d_inode(dentry)->i_lock); - } - mark_inode_dirty_sync(d_inode(dentry)); - free_xid(xid); - return total_written; -} -#endif // end netfslib remove 2483 - struct cifsFileInfo *find_readable_file(struct cifsInodeInfo *cifs_inode, bool fsuid_only) { @@ -4832,293 +4619,6 @@ int cifs_file_mmap(struct file *file, struct vm_area_struct *vma) return rc; } -#if 0 // TODO remove 4794 -/* - * Unlock a bunch of folios in the pagecache. 
- */ -static void cifs_unlock_folios(struct address_space *mapping, pgoff_t first, pgoff_t last) -{ - struct folio *folio; - XA_STATE(xas, &mapping->i_pages, first); - - rcu_read_lock(); - xas_for_each(&xas, folio, last) { - folio_unlock(folio); - } - rcu_read_unlock(); -} - -static void cifs_readahead_complete(struct work_struct *work) -{ - struct cifs_io_subrequest *rdata = container_of(work, - struct cifs_io_subrequest, work); - struct folio *folio; - pgoff_t last; - bool good = rdata->result == 0 || (rdata->result == -EAGAIN && rdata->got_bytes); - - XA_STATE(xas, &rdata->mapping->i_pages, rdata->subreq.start / PAGE_SIZE); - - if (good) - cifs_readahead_to_fscache(rdata->mapping->host, - rdata->subreq.start, rdata->subreq.len); - - if (iov_iter_count(&rdata->subreq.io_iter) > 0) - iov_iter_zero(iov_iter_count(&rdata->subreq.io_iter), &rdata->subreq.io_iter); - - last = (rdata->subreq.start + rdata->subreq.len - 1) / PAGE_SIZE; - - rcu_read_lock(); - xas_for_each(&xas, folio, last) { - if (good) { - flush_dcache_folio(folio); - folio_mark_uptodate(folio); - } - folio_unlock(folio); - } - rcu_read_unlock(); - - cifs_put_readdata(rdata); -} - -static void cifs_readahead(struct readahead_control *ractl) -{ - struct cifsFileInfo *open_file = ractl->file->private_data; - struct cifs_sb_info *cifs_sb = CIFS_FILE_SB(ractl->file); - struct TCP_Server_Info *server; - unsigned int xid, nr_pages, cache_nr_pages = 0; - unsigned int ra_pages; - pgoff_t next_cached = ULONG_MAX, ra_index; - bool caching = fscache_cookie_enabled(cifs_inode_cookie(ractl->mapping->host)) && - cifs_inode_cookie(ractl->mapping->host)->cache_priv; - bool check_cache = caching; - pid_t pid; - int rc = 0; - - /* Note that readahead_count() lags behind our dequeuing of pages from - * the ractl, wo we have to keep track for ourselves. - */ - ra_pages = readahead_count(ractl); - ra_index = readahead_index(ractl); - - xid = get_xid(); - - if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_RWPIDFORWARD) - pid = open_file->pid; - else - pid = current->tgid; - - server = cifs_pick_channel(tlink_tcon(open_file->tlink)->ses); - - cifs_dbg(FYI, "%s: file=%p mapping=%p num_pages=%u\n", - __func__, ractl->file, ractl->mapping, ra_pages); - - /* - * Chop the readahead request up into rsize-sized read requests. - */ - while ((nr_pages = ra_pages)) { - unsigned int i; - struct cifs_io_subrequest *rdata; - struct cifs_credits credits_on_stack; - struct cifs_credits *credits = &credits_on_stack; - struct folio *folio; - pgoff_t fsize; - size_t rsize; - - /* - * Find out if we have anything cached in the range of - * interest, and if so, where the next chunk of cached data is. - */ - if (caching) { - if (check_cache) { - rc = cifs_fscache_query_occupancy( - ractl->mapping->host, ra_index, nr_pages, - &next_cached, &cache_nr_pages); - if (rc < 0) - caching = false; - check_cache = false; - } - - if (ra_index == next_cached) { - /* - * TODO: Send a whole batch of pages to be read - * by the cache. - */ - folio = readahead_folio(ractl); - fsize = folio_nr_pages(folio); - ra_pages -= fsize; - ra_index += fsize; - if (cifs_readpage_from_fscache(ractl->mapping->host, - &folio->page) < 0) { - /* - * TODO: Deal with cache read failure - * here, but for the moment, delegate - * that to readpage. 
- */ - caching = false; - } - folio_unlock(folio); - next_cached += fsize; - cache_nr_pages -= fsize; - if (cache_nr_pages == 0) - check_cache = true; - continue; - } - } - - if (open_file->invalidHandle) { - rc = cifs_reopen_file(open_file, true); - if (rc) { - if (rc == -EAGAIN) - continue; - break; - } - } - - if (cifs_sb->ctx->rsize == 0) - cifs_sb->ctx->rsize = - server->ops->negotiate_rsize(tlink_tcon(open_file->tlink), - cifs_sb->ctx); - - rc = server->ops->wait_mtu_credits(server, cifs_sb->ctx->rsize, - &rsize, credits); - if (rc) - break; - nr_pages = min_t(size_t, rsize / PAGE_SIZE, ra_pages); - if (next_cached != ULONG_MAX) - nr_pages = min_t(size_t, nr_pages, next_cached - ra_index); - - /* - * Give up immediately if rsize is too small to read an entire - * page. The VFS will fall back to readpage. We should never - * reach this point however since we set ra_pages to 0 when the - * rsize is smaller than a cache page. - */ - if (unlikely(!nr_pages)) { - add_credits_and_wake_if(server, credits, 0); - break; - } - - rdata = cifs_readdata_alloc(cifs_readahead_complete); - if (!rdata) { - /* best to give up if we're out of mem */ - add_credits_and_wake_if(server, credits, 0); - break; - } - - rdata->subreq.start = ra_index * PAGE_SIZE; - rdata->subreq.len = nr_pages * PAGE_SIZE; - rdata->cfile = cifsFileInfo_get(open_file); - rdata->server = server; - rdata->mapping = ractl->mapping; - rdata->pid = pid; - rdata->credits = credits_on_stack; - - for (i = 0; i < nr_pages; i++) { - if (!readahead_folio(ractl)) - WARN_ON(1); - } - ra_pages -= nr_pages; - ra_index += nr_pages; - - iov_iter_xarray(&rdata->subreq.io_iter, ITER_DEST, &rdata->mapping->i_pages, - rdata->subreq.start, rdata->subreq.len); - - rc = adjust_credits(server, &rdata->credits, rdata->subreq.len); - if (!rc) { - if (rdata->cfile->invalidHandle) - rc = -EAGAIN; - else - rc = server->ops->async_readv(rdata); - } - - if (rc) { - add_credits_and_wake_if(server, &rdata->credits, 0); - cifs_unlock_folios(rdata->mapping, - rdata->subreq.start / PAGE_SIZE, - (rdata->subreq.start + rdata->subreq.len - 1) / PAGE_SIZE); - /* Fallback to the readpage in error/reconnect cases */ - cifs_put_readdata(rdata); - break; - } - - cifs_put_readdata(rdata); - } - - free_xid(xid); -} - -/* - * cifs_readpage_worker must be called with the page pinned - */ -static int cifs_readpage_worker(struct file *file, struct page *page, - loff_t *poffset) -{ - struct inode *inode = file_inode(file); - struct timespec64 atime, mtime; - char *read_data; - int rc; - - /* Is the page cached? 
*/ - rc = cifs_readpage_from_fscache(inode, page); - if (rc == 0) - goto read_complete; - - read_data = kmap(page); - /* for reads over a certain size could initiate async read ahead */ - - rc = cifs_read(file, read_data, PAGE_SIZE, poffset); - - if (rc < 0) - goto io_error; - else - cifs_dbg(FYI, "Bytes read %d\n", rc); - - /* we do not want atime to be less than mtime, it broke some apps */ - atime = inode_set_atime_to_ts(inode, current_time(inode)); - mtime = inode_get_mtime(inode); - if (timespec64_compare(&atime, &mtime) < 0) - inode_set_atime_to_ts(inode, inode_get_mtime(inode)); - - if (PAGE_SIZE > rc) - memset(read_data + rc, 0, PAGE_SIZE - rc); - - flush_dcache_page(page); - SetPageUptodate(page); - rc = 0; - -io_error: - kunmap(page); - -read_complete: - unlock_page(page); - return rc; -} - -static int cifs_read_folio(struct file *file, struct folio *folio) -{ - struct page *page = &folio->page; - loff_t offset = page_file_offset(page); - int rc = -EACCES; - unsigned int xid; - - xid = get_xid(); - - if (file->private_data == NULL) { - rc = -EBADF; - free_xid(xid); - return rc; - } - - cifs_dbg(FYI, "read_folio %p at offset %d 0x%x\n", - page, (int)offset, (int)offset); - - rc = cifs_readpage_worker(file, page, &offset); - - free_xid(xid); - return rc; -} -#endif // end netfslib remove 4794 - static int is_inode_writable(struct cifsInodeInfo *cifs_inode) { struct cifsFileInfo *open_file; @@ -5164,125 +4664,6 @@ bool is_size_safe_to_change(struct cifsInodeInfo *cifsInode, __u64 end_of_file) return true; } -#if 0 // TODO remove 5152 -static int cifs_write_begin(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, - struct page **pagep, void **fsdata) -{ - int oncethru = 0; - pgoff_t index = pos >> PAGE_SHIFT; - loff_t offset = pos & (PAGE_SIZE - 1); - loff_t page_start = pos & PAGE_MASK; - loff_t i_size; - struct page *page; - int rc = 0; - - cifs_dbg(FYI, "write_begin from %lld len %d\n", (long long)pos, len); - -start: - page = grab_cache_page_write_begin(mapping, index); - if (!page) { - rc = -ENOMEM; - goto out; - } - - if (PageUptodate(page)) - goto out; - - /* - * If we write a full page it will be up to date, no need to read from - * the server. If the write is short, we'll end up doing a sync write - * instead. - */ - if (len == PAGE_SIZE) - goto out; - - /* - * optimize away the read when we have an oplock, and we're not - * expecting to use any of the data we'd be reading in. That - * is, when the page lies beyond the EOF, or straddles the EOF - * and the write will cover all of the existing data. - */ - if (CIFS_CACHE_READ(CIFS_I(mapping->host))) { - i_size = i_size_read(mapping->host); - if (page_start >= i_size || - (offset == 0 && (pos + len) >= i_size)) { - zero_user_segments(page, 0, offset, - offset + len, - PAGE_SIZE); - /* - * PageChecked means that the parts of the page - * to which we're not writing are considered up - * to date. Once the data is copied to the - * page, it can be set uptodate. - */ - SetPageChecked(page); - goto out; - } - } - - if ((file->f_flags & O_ACCMODE) != O_WRONLY && !oncethru) { - /* - * might as well read a page, it is fast enough. If we get - * an error, we don't need to return it. cifs_write_end will - * do a sync write instead since PG_uptodate isn't set. 
- */ - cifs_readpage_worker(file, page, &page_start); - put_page(page); - oncethru = 1; - goto start; - } else { - /* we could try using another file handle if there is one - - but how would we lock it to prevent close of that handle - racing with this read? In any case - this will be written out by write_end so is fine */ - } -out: - *pagep = page; - return rc; -} - -static bool cifs_release_folio(struct folio *folio, gfp_t gfp) -{ - if (folio_test_private(folio)) - return 0; - if (folio_test_fscache(folio)) { - if (current_is_kswapd() || !(gfp & __GFP_FS)) - return false; - folio_wait_fscache(folio); - } - fscache_note_page_release(cifs_inode_cookie(folio->mapping->host)); - return true; -} - -static void cifs_invalidate_folio(struct folio *folio, size_t offset, - size_t length) -{ - folio_wait_fscache(folio); -} - -static int cifs_launder_folio(struct folio *folio) -{ - int rc = 0; - loff_t range_start = folio_pos(folio); - loff_t range_end = range_start + folio_size(folio); - struct writeback_control wbc = { - .sync_mode = WB_SYNC_ALL, - .nr_to_write = 0, - .range_start = range_start, - .range_end = range_end, - }; - - cifs_dbg(FYI, "Launder page: %lu\n", folio->index); - - if (folio_clear_dirty_for_io(folio)) - rc = cifs_writepage_locked(&folio->page, &wbc); - - folio_wait_fscache(folio); - return rc; -} -#endif // end netfslib remove 5152 - void cifs_oplock_break(struct work_struct *work) { struct cifsFileInfo *cfile = container_of(work, struct cifsFileInfo, @@ -5372,27 +4753,6 @@ void cifs_oplock_break(struct work_struct *work) cifs_done_oplock_break(cinode); } -#if 0 // TODO remove 5333 -/* - * The presence of cifs_direct_io() in the address space ops vector - * allowes open() O_DIRECT flags which would have failed otherwise. - * - * In the non-cached mode (mount with cache=none), we shunt off direct read and write requests - * so this method should never be called. - * - * Direct IO is not yet supported in the cached mode. - */ -static ssize_t -cifs_direct_io(struct kiocb *iocb, struct iov_iter *iter) -{ - /* - * FIXME - * Eventually need to support direct IO for non forcedirectio mounts - */ - return -EINVAL; -} -#endif // netfs end remove 5333 - static int cifs_swap_activate(struct swap_info_struct *sis, struct file *swap_file, sector_t *span) { diff --git a/fs/smb/client/fscache.c b/fs/smb/client/fscache.c index 228fe57bbde3..bd9284923cc6 100644 --- a/fs/smb/client/fscache.c +++ b/fs/smb/client/fscache.c @@ -136,114 +136,3 @@ void cifs_fscache_release_inode_cookie(struct inode *inode) cifsi->netfs.cache = NULL; } } - -#if 0 // TODO remove -/* - * Fallback page reading interface. - */ -static int fscache_fallback_read_page(struct inode *inode, struct page *page) -{ - struct netfs_cache_resources cres; - struct fscache_cookie *cookie = cifs_inode_cookie(inode); - struct iov_iter iter; - struct bio_vec bvec; - int ret; - - memset(&cres, 0, sizeof(cres)); - bvec_set_page(&bvec, page, PAGE_SIZE, 0); - iov_iter_bvec(&iter, ITER_DEST, &bvec, 1, PAGE_SIZE); - - ret = fscache_begin_read_operation(&cres, cookie); - if (ret < 0) - return ret; - - ret = fscache_read(&cres, page_offset(page), &iter, NETFS_READ_HOLE_FAIL, - NULL, NULL); - fscache_end_operation(&cres); - return ret; -} - -/* - * Fallback page writing interface. 
- */ -static int fscache_fallback_write_pages(struct inode *inode, loff_t start, size_t len, - bool no_space_allocated_yet) -{ - struct netfs_cache_resources cres; - struct fscache_cookie *cookie = cifs_inode_cookie(inode); - struct iov_iter iter; - int ret; - - memset(&cres, 0, sizeof(cres)); - iov_iter_xarray(&iter, ITER_SOURCE, &inode->i_mapping->i_pages, start, len); - - ret = fscache_begin_write_operation(&cres, cookie); - if (ret < 0) - return ret; - - ret = cres.ops->prepare_write(&cres, &start, &len, len, i_size_read(inode), - no_space_allocated_yet); - if (ret == 0) - ret = fscache_write(&cres, start, &iter, NULL, NULL); - fscache_end_operation(&cres); - return ret; -} - -/* - * Retrieve a page from FS-Cache - */ -int __cifs_readpage_from_fscache(struct inode *inode, struct page *page) -{ - int ret; - - cifs_dbg(FYI, "%s: (fsc:%p, p:%p, i:0x%p\n", - __func__, cifs_inode_cookie(inode), page, inode); - - ret = fscache_fallback_read_page(inode, page); - if (ret < 0) - return ret; - - /* Read completed synchronously */ - SetPageUptodate(page); - return 0; -} - -void __cifs_readahead_to_fscache(struct inode *inode, loff_t pos, size_t len) -{ - cifs_dbg(FYI, "%s: (fsc: %p, p: %llx, l: %zx, i: %p)\n", - __func__, cifs_inode_cookie(inode), pos, len, inode); - - fscache_fallback_write_pages(inode, pos, len, true); -} - -/* - * Query the cache occupancy. - */ -int __cifs_fscache_query_occupancy(struct inode *inode, - pgoff_t first, unsigned int nr_pages, - pgoff_t *_data_first, - unsigned int *_data_nr_pages) -{ - struct netfs_cache_resources cres; - struct fscache_cookie *cookie = cifs_inode_cookie(inode); - loff_t start, data_start; - size_t len, data_len; - int ret; - - ret = fscache_begin_read_operation(&cres, cookie); - if (ret < 0) - return ret; - - start = first * PAGE_SIZE; - len = nr_pages * PAGE_SIZE; - ret = cres.ops->query_occupancy(&cres, start, len, PAGE_SIZE, - &data_start, &data_len); - if (ret == 0) { - *_data_first = data_start / PAGE_SIZE; - *_data_nr_pages = len / PAGE_SIZE; - } - - fscache_end_operation(&cres); - return ret; -} -#endif diff --git a/fs/smb/client/fscache.h b/fs/smb/client/fscache.h index c2c05a778a71..ece1a826adb9 100644 --- a/fs/smb/client/fscache.h +++ b/fs/smb/client/fscache.h @@ -74,43 +74,6 @@ static inline void cifs_invalidate_cache(struct inode *inode, unsigned int flags i_size_read(inode), flags); } -#if 0 // TODO remove -extern int __cifs_fscache_query_occupancy(struct inode *inode, - pgoff_t first, unsigned int nr_pages, - pgoff_t *_data_first, - unsigned int *_data_nr_pages); - -static inline int cifs_fscache_query_occupancy(struct inode *inode, - pgoff_t first, unsigned int nr_pages, - pgoff_t *_data_first, - unsigned int *_data_nr_pages) -{ - if (!cifs_inode_cookie(inode)) - return -ENOBUFS; - return __cifs_fscache_query_occupancy(inode, first, nr_pages, - _data_first, _data_nr_pages); -} - -extern int __cifs_readpage_from_fscache(struct inode *pinode, struct page *ppage); -extern void __cifs_readahead_to_fscache(struct inode *pinode, loff_t pos, size_t len); - - -static inline int cifs_readpage_from_fscache(struct inode *inode, - struct page *page) -{ - if (cifs_inode_cookie(inode)) - return __cifs_readpage_from_fscache(inode, page); - return -ENOBUFS; -} - -static inline void cifs_readahead_to_fscache(struct inode *inode, - loff_t pos, size_t len) -{ - if (cifs_inode_cookie(inode)) - __cifs_readahead_to_fscache(inode, pos, len); -} -#endif - #else /* CONFIG_CIFS_FSCACHE */ static inline void cifs_fscache_fill_coherency(struct inode *inode, 
@@ -127,27 +90,6 @@ static inline void cifs_fscache_unuse_inode_cookie(struct inode *inode, bool upd static inline struct fscache_cookie *cifs_inode_cookie(struct inode *inode) { return NULL; } static inline void cifs_invalidate_cache(struct inode *inode, unsigned int flags) {} -#if 0 // TODO remove -static inline int cifs_fscache_query_occupancy(struct inode *inode, - pgoff_t first, unsigned int nr_pages, - pgoff_t *_data_first, - unsigned int *_data_nr_pages) -{ - *_data_first = ULONG_MAX; - *_data_nr_pages = 0; - return -ENOBUFS; -} - -static inline int -cifs_readpage_from_fscache(struct inode *inode, struct page *page) -{ - return -ENOBUFS; -} - -static inline -void cifs_readahead_to_fscache(struct inode *inode, loff_t pos, size_t len) {} -#endif - #endif /* CONFIG_CIFS_FSCACHE */ #endif /* _CIFS_FSCACHE_H */
From patchwork Mon Feb 5 22:57:23 2024
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 197098
From: David Howells
To: Steve French
Cc: David Howells, Jeff Layton, Matthew Wilcox, Paulo Alcantara, Shyam Prasad N,
    Tom Talpey, Christian Brauner, netfs@lists.linux.dev, linux-cifs@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org,
    linux-kernel@vger.kernel.org, Steve French, Shyam Prasad N, Rohith Surabattula
Subject: [PATCH v5 11/12] cifs: Remove some code that's no longer used, part 2
Date: Mon, 5 Feb 2024 22:57:23 +0000
Message-ID: <20240205225726.3104808-12-dhowells@redhat.com>
In-Reply-To: <20240205225726.3104808-1-dhowells@redhat.com>
References: <20240205225726.3104808-1-dhowells@redhat.com>

Remove some code that was #if'd out with the netfslib conversion. This is
split into parts for file.c as the diff generator otherwise produces a
hard-to-read diff for part of it where a big chunk is cut out.

Signed-off-by: David Howells
cc: Steve French
cc: Shyam Prasad N
cc: Rohith Surabattula
cc: Jeff Layton
cc: linux-cifs@vger.kernel.org
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 fs/smb/client/file.c | 694 +------------------------------------------
 1 file changed, 1 insertion(+), 693 deletions(-)

diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c
index e31fd607a441..c50d4071c877 100644
--- a/fs/smb/client/file.c
+++ b/fs/smb/client/file.c
@@ -2563,699 +2563,6 @@ cifs_get_readable_path(struct cifs_tcon *tcon, const char *name,
 	return -ENOENT;
 }
 
-#if 0 // TODO remove 2773
-void
-cifs_writedata_release(struct cifs_io_subrequest *wdata)
-{
-	if (wdata->uncached)
-		kref_put(&wdata->ctx->refcount, cifs_aio_ctx_release);
-#ifdef CONFIG_CIFS_SMB_DIRECT
-	if (wdata->mr) {
-		smbd_deregister_mr(wdata->mr);
-		wdata->mr = NULL;
-	}
-#endif
-
-	if (wdata->cfile)
-		cifsFileInfo_put(wdata->cfile);
-
-	kfree(wdata);
-}
-
-/*
- * Write failed with a retryable error. Resend the write request. It's also
- * possible that the page was redirtied so re-clean the page.
- */ -static void -cifs_writev_requeue(struct cifs_io_subrequest *wdata) -{ - int rc = 0; - struct inode *inode = d_inode(wdata->cfile->dentry); - struct TCP_Server_Info *server; - unsigned int rest_len = wdata->subreq.len; - loff_t fpos = wdata->subreq.start; - - server = tlink_tcon(wdata->cfile->tlink)->ses->server; - do { - struct cifs_io_subrequest *wdata2; - unsigned int wsize, cur_len; - - wsize = server->ops->wp_retry_size(inode); - if (wsize < rest_len) { - if (wsize < PAGE_SIZE) { - rc = -EOPNOTSUPP; - break; - } - cur_len = min(round_down(wsize, PAGE_SIZE), rest_len); - } else { - cur_len = rest_len; - } - - wdata2 = cifs_writedata_alloc(cifs_writev_complete); - if (!wdata2) { - rc = -ENOMEM; - break; - } - - wdata2->sync_mode = wdata->sync_mode; - wdata2->subreq.start = fpos; - wdata2->subreq.len = cur_len; - wdata2->subreq.io_iter = wdata->subreq.io_iter; - - iov_iter_advance(&wdata2->subreq.io_iter, fpos - wdata->subreq.start); - iov_iter_truncate(&wdata2->subreq.io_iter, wdata2->subreq.len); - - if (iov_iter_is_xarray(&wdata2->subreq.io_iter)) - /* Check for pages having been redirtied and clean - * them. We can do this by walking the xarray. If - * it's not an xarray, then it's a DIO and we shouldn't - * be mucking around with the page bits. - */ - cifs_undirty_folios(inode, fpos, cur_len); - - rc = cifs_get_writable_file(CIFS_I(inode), FIND_WR_ANY, - &wdata2->cfile); - if (!wdata2->cfile) { - cifs_dbg(VFS, "No writable handle to retry writepages rc=%d\n", - rc); - if (!is_retryable_error(rc)) - rc = -EBADF; - } else { - wdata2->pid = wdata2->cfile->pid; - rc = server->ops->async_writev(wdata2); - } - - cifs_put_writedata(wdata2); - if (rc) { - if (is_retryable_error(rc)) - continue; - fpos += cur_len; - rest_len -= cur_len; - break; - } - - fpos += cur_len; - rest_len -= cur_len; - } while (rest_len > 0); - - /* Clean up remaining pages from the original wdata */ - if (iov_iter_is_xarray(&wdata->subreq.io_iter)) - cifs_pages_write_failed(inode, fpos, rest_len); - - if (rc != 0 && !is_retryable_error(rc)) - mapping_set_error(inode->i_mapping, rc); - cifs_put_writedata(wdata); -} - -void -cifs_writev_complete(struct work_struct *work) -{ - struct cifs_io_subrequest *wdata = container_of(work, - struct cifs_io_subrequest, work); - struct inode *inode = d_inode(wdata->cfile->dentry); - - if (wdata->result == 0) { - spin_lock(&inode->i_lock); - cifs_update_eof(CIFS_I(inode), wdata->subreq.start, wdata->subreq.len); - spin_unlock(&inode->i_lock); - cifs_stats_bytes_written(tlink_tcon(wdata->cfile->tlink), - wdata->subreq.len); - } else if (wdata->sync_mode == WB_SYNC_ALL && wdata->result == -EAGAIN) - return cifs_writev_requeue(wdata); - - if (wdata->result == -EAGAIN) - cifs_pages_write_redirty(inode, wdata->subreq.start, wdata->subreq.len); - else if (wdata->result < 0) - cifs_pages_write_failed(inode, wdata->subreq.start, wdata->subreq.len); - else - cifs_pages_written_back(inode, wdata->subreq.start, wdata->subreq.len); - - if (wdata->result != -EAGAIN) - mapping_set_error(inode->i_mapping, wdata->result); - cifs_put_writedata(wdata); -} - -struct cifs_io_subrequest *cifs_writedata_alloc(work_func_t complete) -{ - struct cifs_io_subrequest *wdata; - - wdata = kzalloc(sizeof(*wdata), GFP_NOFS); - if (wdata != NULL) { - refcount_set(&wdata->subreq.ref, 1); - INIT_LIST_HEAD(&wdata->list); - init_completion(&wdata->done); - INIT_WORK(&wdata->work, complete); - } - return wdata; -} - -static int cifs_partialpagewrite(struct page *page, unsigned from, unsigned to) -{ - struct 
address_space *mapping = page->mapping; - loff_t offset = (loff_t)page->index << PAGE_SHIFT; - char *write_data; - int rc = -EFAULT; - int bytes_written = 0; - struct inode *inode; - struct cifsFileInfo *open_file; - - if (!mapping || !mapping->host) - return -EFAULT; - - inode = page->mapping->host; - - offset += (loff_t)from; - write_data = kmap(page); - write_data += from; - - if ((to > PAGE_SIZE) || (from > to)) { - kunmap(page); - return -EIO; - } - - /* racing with truncate? */ - if (offset > mapping->host->i_size) { - kunmap(page); - return 0; /* don't care */ - } - - /* check to make sure that we are not extending the file */ - if (mapping->host->i_size - offset < (loff_t)to) - to = (unsigned)(mapping->host->i_size - offset); - - rc = cifs_get_writable_file(CIFS_I(mapping->host), FIND_WR_ANY, - &open_file); - if (!rc) { - bytes_written = cifs_write(open_file, open_file->pid, - write_data, to - from, &offset); - cifsFileInfo_put(open_file); - /* Does mm or vfs already set times? */ - simple_inode_init_ts(inode); - if ((bytes_written > 0) && (offset)) - rc = 0; - else if (bytes_written < 0) - rc = bytes_written; - else - rc = -EFAULT; - } else { - cifs_dbg(FYI, "No writable handle for write page rc=%d\n", rc); - if (!is_retryable_error(rc)) - rc = -EIO; - } - - kunmap(page); - return rc; -} - -/* - * Extend the region to be written back to include subsequent contiguously - * dirty pages if possible, but don't sleep while doing so. - */ -static void cifs_extend_writeback(struct address_space *mapping, - long *_count, - loff_t start, - int max_pages, - size_t max_len, - unsigned int *_len) -{ - struct folio_batch batch; - struct folio *folio; - unsigned int psize, nr_pages; - size_t len = *_len; - pgoff_t index = (start + len) / PAGE_SIZE; - bool stop = true; - unsigned int i; - XA_STATE(xas, &mapping->i_pages, index); - - folio_batch_init(&batch); - - do { - /* Firstly, we gather up a batch of contiguous dirty pages - * under the RCU read lock - but we can't clear the dirty flags - * there if any of those pages are mapped. - */ - rcu_read_lock(); - - xas_for_each(&xas, folio, ULONG_MAX) { - stop = true; - if (xas_retry(&xas, folio)) - continue; - if (xa_is_value(folio)) - break; - if (folio->index != index) - break; - if (!folio_try_get_rcu(folio)) { - xas_reset(&xas); - continue; - } - nr_pages = folio_nr_pages(folio); - if (nr_pages > max_pages) - break; - - /* Has the page moved or been split? */ - if (unlikely(folio != xas_reload(&xas))) { - folio_put(folio); - break; - } - - if (!folio_trylock(folio)) { - folio_put(folio); - break; - } - if (!folio_test_dirty(folio) || folio_test_writeback(folio)) { - folio_unlock(folio); - folio_put(folio); - break; - } - - max_pages -= nr_pages; - psize = folio_size(folio); - len += psize; - stop = false; - if (max_pages <= 0 || len >= max_len || *_count <= 0) - stop = true; - - index += nr_pages; - if (!folio_batch_add(&batch, folio)) - break; - if (stop) - break; - } - - if (!stop) - xas_pause(&xas); - rcu_read_unlock(); - - /* Now, if we obtained any pages, we can shift them to being - * writable and mark them for caching. - */ - if (!folio_batch_count(&batch)) - break; - - for (i = 0; i < folio_batch_count(&batch); i++) { - folio = batch.folios[i]; - /* The folio should be locked, dirty and not undergoing - * writeback from the loop above. 
- */ - if (!folio_clear_dirty_for_io(folio)) - WARN_ON(1); - folio_start_writeback(folio); - - *_count -= folio_nr_pages(folio); - folio_unlock(folio); - } - - folio_batch_release(&batch); - cond_resched(); - } while (!stop); - - *_len = len; -} - -/* - * Write back the locked page and any subsequent non-locked dirty pages. - */ -static ssize_t cifs_write_back_from_locked_folio(struct address_space *mapping, - struct writeback_control *wbc, - struct folio *folio, - loff_t start, loff_t end) -{ - struct inode *inode = mapping->host; - struct TCP_Server_Info *server; - struct cifs_io_subrequest *wdata; - struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb); - struct cifs_credits credits_on_stack; - struct cifs_credits *credits = &credits_on_stack; - struct cifsFileInfo *cfile = NULL; - unsigned int xid, len; - loff_t i_size = i_size_read(inode); - size_t max_len, wsize; - long count = wbc->nr_to_write; - int rc; - - /* The folio should be locked, dirty and not undergoing writeback. */ - folio_start_writeback(folio); - - count -= folio_nr_pages(folio); - len = folio_size(folio); - - xid = get_xid(); - server = cifs_pick_channel(cifs_sb_master_tcon(cifs_sb)->ses); - - rc = cifs_get_writable_file(CIFS_I(inode), FIND_WR_ANY, &cfile); - if (rc) { - cifs_dbg(VFS, "No writable handle in writepages rc=%d\n", rc); - goto err_xid; - } - - rc = server->ops->wait_mtu_credits(server, cifs_sb->ctx->wsize, - &wsize, credits); - if (rc != 0) - goto err_close; - - wdata = cifs_writedata_alloc(cifs_writev_complete); - if (!wdata) { - rc = -ENOMEM; - goto err_uncredit; - } - - wdata->sync_mode = wbc->sync_mode; - wdata->subreq.start = folio_pos(folio); - wdata->pid = cfile->pid; - wdata->credits = credits_on_stack; - wdata->cfile = cfile; - wdata->server = server; - cfile = NULL; - - /* Find all consecutive lockable dirty pages, stopping when we find a - * page that is not immediately lockable, is not dirty or is missing, - * or we reach the end of the range. - */ - if (start < i_size) { - /* Trim the write to the EOF; the extra data is ignored. Also - * put an upper limit on the size of a single storedata op. - */ - max_len = wsize; - max_len = min_t(unsigned long long, max_len, end - start + 1); - max_len = min_t(unsigned long long, max_len, i_size - start); - - if (len < max_len) { - int max_pages = INT_MAX; - -#ifdef CONFIG_CIFS_SMB_DIRECT - if (server->smbd_conn) - max_pages = server->smbd_conn->max_frmr_depth; -#endif - max_pages -= folio_nr_pages(folio); - - if (max_pages > 0) - cifs_extend_writeback(mapping, &count, start, - max_pages, max_len, &len); - } - len = min_t(loff_t, len, max_len); - } - - wdata->subreq.len = len; - - /* We now have a contiguous set of dirty pages, each with writeback - * set; the first page is still locked at this point, but all the rest - * have been unlocked. - */ - folio_unlock(folio); - - if (start < i_size) { - iov_iter_xarray(&wdata->subreq.io_iter, ITER_SOURCE, &mapping->i_pages, - start, len); - - rc = adjust_credits(wdata->server, &wdata->credits, wdata->subreq.len); - if (rc) - goto err_wdata; - - if (wdata->cfile->invalidHandle) - rc = -EAGAIN; - else - rc = wdata->server->ops->async_writev(wdata); - if (rc >= 0) { - cifs_put_writedata(wdata); - goto err_close; - } - } else { - /* The dirty region was entirely beyond the EOF. 
*/ - cifs_pages_written_back(inode, start, len); - rc = 0; - } - -err_wdata: - cifs_put_writedata(wdata); -err_uncredit: - add_credits_and_wake_if(server, credits, 0); -err_close: - if (cfile) - cifsFileInfo_put(cfile); -err_xid: - free_xid(xid); - if (rc == 0) { - wbc->nr_to_write = count; - rc = len; - } else if (is_retryable_error(rc)) { - cifs_pages_write_redirty(inode, start, len); - } else { - cifs_pages_write_failed(inode, start, len); - mapping_set_error(mapping, rc); - } - /* Indication to update ctime and mtime as close is deferred */ - set_bit(CIFS_INO_MODIFIED_ATTR, &CIFS_I(inode)->flags); - return rc; -} - -/* - * write a region of pages back to the server - */ -static int cifs_writepages_region(struct address_space *mapping, - struct writeback_control *wbc, - loff_t start, loff_t end, loff_t *_next) -{ - struct folio_batch fbatch; - int skips = 0; - - folio_batch_init(&fbatch); - do { - int nr; - pgoff_t index = start / PAGE_SIZE; - - nr = filemap_get_folios_tag(mapping, &index, end / PAGE_SIZE, - PAGECACHE_TAG_DIRTY, &fbatch); - if (!nr) - break; - - for (int i = 0; i < nr; i++) { - ssize_t ret; - struct folio *folio = fbatch.folios[i]; - -redo_folio: - start = folio_pos(folio); /* May regress with THPs */ - - /* At this point we hold neither the i_pages lock nor the - * page lock: the page may be truncated or invalidated - * (changing page->mapping to NULL), or even swizzled - * back from swapper_space to tmpfs file mapping - */ - if (wbc->sync_mode != WB_SYNC_NONE) { - ret = folio_lock_killable(folio); - if (ret < 0) - goto write_error; - } else { - if (!folio_trylock(folio)) - goto skip_write; - } - - if (folio->mapping != mapping || - !folio_test_dirty(folio)) { - start += folio_size(folio); - folio_unlock(folio); - continue; - } - - if (folio_test_writeback(folio) || - folio_test_fscache(folio)) { - folio_unlock(folio); - if (wbc->sync_mode == WB_SYNC_NONE) - goto skip_write; - - folio_wait_writeback(folio); -#ifdef CONFIG_CIFS_FSCACHE - folio_wait_fscache(folio); -#endif - goto redo_folio; - } - - if (!folio_clear_dirty_for_io(folio)) - /* We hold the page lock - it should've been dirty. */ - WARN_ON(1); - - ret = cifs_write_back_from_locked_folio(mapping, wbc, folio, start, end); - if (ret < 0) - goto write_error; - - start += ret; - continue; - -write_error: - folio_batch_release(&fbatch); - *_next = start; - return ret; - -skip_write: - /* - * Too many skipped writes, or need to reschedule? - * Treat it as a write error without an error code. - */ - if (skips >= 5 || need_resched()) { - ret = 0; - goto write_error; - } - - /* Otherwise, just skip that folio and go on to the next */ - skips++; - start += folio_size(folio); - continue; - } - - folio_batch_release(&fbatch); - cond_resched(); - } while (wbc->nr_to_write > 0); - - *_next = start; - return 0; -} - -/* - * Write some of the pending data back to the server - */ -static int cifs_writepages(struct address_space *mapping, - struct writeback_control *wbc) -{ - loff_t start, next; - int ret; - - /* We have to be careful as we can end up racing with setattr() - * truncating the pagecache since the caller doesn't take a lock here - * to prevent it. 
- */ - - if (wbc->range_cyclic) { - start = mapping->writeback_index * PAGE_SIZE; - ret = cifs_writepages_region(mapping, wbc, start, LLONG_MAX, &next); - if (ret == 0) { - mapping->writeback_index = next / PAGE_SIZE; - if (start > 0 && wbc->nr_to_write > 0) { - ret = cifs_writepages_region(mapping, wbc, 0, - start, &next); - if (ret == 0) - mapping->writeback_index = - next / PAGE_SIZE; - } - } - } else if (wbc->range_start == 0 && wbc->range_end == LLONG_MAX) { - ret = cifs_writepages_region(mapping, wbc, 0, LLONG_MAX, &next); - if (wbc->nr_to_write > 0 && ret == 0) - mapping->writeback_index = next / PAGE_SIZE; - } else { - ret = cifs_writepages_region(mapping, wbc, - wbc->range_start, wbc->range_end, &next); - } - - return ret; -} - -static int -cifs_writepage_locked(struct page *page, struct writeback_control *wbc) -{ - int rc; - unsigned int xid; - - xid = get_xid(); -/* BB add check for wbc flags */ - get_page(page); - if (!PageUptodate(page)) - cifs_dbg(FYI, "ppw - page not up to date\n"); - - /* - * Set the "writeback" flag, and clear "dirty" in the radix tree. - * - * A writepage() implementation always needs to do either this, - * or re-dirty the page with "redirty_page_for_writepage()" in - * the case of a failure. - * - * Just unlocking the page will cause the radix tree tag-bits - * to fail to update with the state of the page correctly. - */ - set_page_writeback(page); -retry_write: - rc = cifs_partialpagewrite(page, 0, PAGE_SIZE); - if (is_retryable_error(rc)) { - if (wbc->sync_mode == WB_SYNC_ALL && rc == -EAGAIN) - goto retry_write; - redirty_page_for_writepage(wbc, page); - } else if (rc != 0) { - SetPageError(page); - mapping_set_error(page->mapping, rc); - } else { - SetPageUptodate(page); - } - end_page_writeback(page); - put_page(page); - free_xid(xid); - return rc; -} - -static int cifs_write_end(struct file *file, struct address_space *mapping, - loff_t pos, unsigned len, unsigned copied, - struct page *page, void *fsdata) -{ - int rc; - struct inode *inode = mapping->host; - struct cifsFileInfo *cfile = file->private_data; - struct cifs_sb_info *cifs_sb = CIFS_SB(cfile->dentry->d_sb); - struct folio *folio = page_folio(page); - __u32 pid; - - if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_RWPIDFORWARD) - pid = cfile->pid; - else - pid = current->tgid; - - cifs_dbg(FYI, "write_end for page %p from pos %lld with %d bytes\n", - page, pos, copied); - - if (folio_test_checked(folio)) { - if (copied == len) - folio_mark_uptodate(folio); - folio_clear_checked(folio); - } else if (!folio_test_uptodate(folio) && copied == PAGE_SIZE) - folio_mark_uptodate(folio); - - if (!folio_test_uptodate(folio)) { - char *page_data; - unsigned offset = pos & (PAGE_SIZE - 1); - unsigned int xid; - - xid = get_xid(); - /* this is probably better than directly calling - partialpage_write since in this function the file handle is - known which we might as well leverage */ - /* BB check if anything else missing out of ppw - such as updating last write time */ - page_data = kmap(page); - rc = cifs_write(cfile, pid, page_data + offset, copied, &pos); - /* if (rc < 0) should we set writebehind rc? 
- */
-		kunmap(page);
-
-		free_xid(xid);
-	} else {
-		rc = copied;
-		pos += copied;
-		set_page_dirty(page);
-	}
-
-	if (rc > 0) {
-		spin_lock(&inode->i_lock);
-		if (pos > inode->i_size) {
-			i_size_write(inode, pos);
-			inode->i_blocks = (512 - 1 + pos) >> 9;
-		}
-		spin_unlock(&inode->i_lock);
-	}
-
-	unlock_page(page);
-	put_page(page);
-	/* Indication to update ctime and mtime as close is deferred */
-	set_bit(CIFS_INO_MODIFIED_ATTR, &CIFS_I(inode)->flags);
-
-	return rc;
-}
-#endif // End netfs removal 2773
-
 /*
  * Flush data on a strict file.
  */
@@ -4571,6 +3878,7 @@ cifs_read(struct file *file, char *read_data, size_t read_size, loff_t *offset)
 }
 #endif // end netfslib remove 4633
 
+
 static vm_fault_t cifs_page_mkwrite(struct vm_fault *vmf)
 {
 	return netfs_page_mkwrite(vmf, NULL);

From patchwork Mon Feb 5 22:57:24 2024
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 197107
From: David Howells
To: Steve French
Cc: David Howells, Jeff Layton, Matthew Wilcox, Paulo Alcantara, Shyam Prasad N,
    Tom Talpey, Christian Brauner, netfs@lists.linux.dev, linux-cifs@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, netdev@vger.kernel.org,
    linux-kernel@vger.kernel.org, Steve French, Shyam Prasad N, Rohith Surabattula
Subject: [PATCH v5 12/12] cifs: Remove some code that's no longer used, part 3
Date: Mon, 5 Feb 2024 22:57:24 +0000
Message-ID: <20240205225726.3104808-13-dhowells@redhat.com>
In-Reply-To: <20240205225726.3104808-1-dhowells@redhat.com>
References: <20240205225726.3104808-1-dhowells@redhat.com>

Remove some code that was #if'd out with the netfslib conversion. This is
split into parts for file.c as the diff generator otherwise produces a
hard-to-read diff for part of it where a big chunk is cut out.
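
For orientation: much of what goes away here is the old uncached/direct I/O
engine that netfslib now drives. The removed cifs_limit_bvec_subset(), for
instance, chose how much of a segmented (bvec-style) buffer to put in one
wire request by applying both a byte budget and a segment budget. Below is a
rough, standalone user-space sketch of that idea only - simplified, invented
types, not the kernel code or its API:

/*
 * Sketch of the span-limiting idea from the removed cifs_limit_bvec_subset():
 * decide how many bytes of a segmented buffer to send in one request,
 * bounded by both a maximum byte count and a maximum number of segments.
 * "struct seg" and all names here are illustrative only.
 */
#include <stdio.h>
#include <stddef.h>

struct seg { size_t len; };

static size_t limit_subset(const struct seg *segs, unsigned int nsegs,
                           size_t count, size_t skip,
                           size_t max_size, unsigned int max_segs,
                           unsigned int *out_segs)
{
        unsigned int ix = 0, used = 0;
        size_t span = 0;

        /* Step over whole leading segments covered by the start offset. */
        while (ix < nsegs && skip >= segs[ix].len) {
                skip -= segs[ix].len;
                ix++;
        }

        /* Accumulate until the data, byte budget or segment budget runs out. */
        while (count && ix < nsegs) {
                size_t len = segs[ix].len - skip;

                if (len > count)
                        len = count;
                if (len > max_size)
                        len = max_size;

                span += len;
                count -= len;
                max_size -= len;
                used++;
                ix++;
                if (max_size == 0 || used >= max_segs)
                        break;
                skip = 0;
        }

        *out_segs = used;
        return span;
}

int main(void)
{
        struct seg segs[] = { { 4096 }, { 4096 }, { 512 } };
        unsigned int used;
        size_t span = limit_subset(segs, 3, 7680, 1024, 6000, 2, &used);

        /* Prints "span=6000 segs=2": both budgets are exhausted here. */
        printf("span=%zu segs=%u\n", span, used);
        return 0;
}

The removed cifs_write_from_iter() and cifs_send_async_read() loops below
apply the same pair of limits, carving the caller's iterator into requests
bounded by the negotiated wsize/rsize and, on SMB Direct, the registration
depth.
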
Signed-off-by: David Howells cc: Steve French cc: Shyam Prasad N cc: Rohith Surabattula cc: Jeff Layton cc: linux-cifs@vger.kernel.org cc: netfs@lists.linux.dev cc: linux-fsdevel@vger.kernel.org cc: linux-mm@kvack.org --- fs/smb/client/file.c | 1004 ------------------------------------------ 1 file changed, 1004 deletions(-) diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c index c50d4071c877..d3ca2ac3e749 100644 --- a/fs/smb/client/file.c +++ b/fs/smb/client/file.c @@ -2689,471 +2689,6 @@ int cifs_flush(struct file *file, fl_owner_t id) return rc; } -#if 0 // TODO remove 3594 -static void collect_uncached_write_data(struct cifs_aio_ctx *ctx); - -static void -cifs_uncached_writev_complete(struct work_struct *work) -{ - struct cifs_io_subrequest *wdata = container_of(work, - struct cifs_io_subrequest, work); - struct inode *inode = d_inode(wdata->cfile->dentry); - struct cifsInodeInfo *cifsi = CIFS_I(inode); - - spin_lock(&inode->i_lock); - cifs_update_eof(cifsi, wdata->subreq.start, wdata->subreq.len); - if (cifsi->netfs.remote_i_size > inode->i_size) - i_size_write(inode, cifsi->netfs.remote_i_size); - spin_unlock(&inode->i_lock); - - complete(&wdata->done); - collect_uncached_write_data(wdata->ctx); - /* the below call can possibly free the last ref to aio ctx */ - cifs_put_writedata(wdata); -} - -static int -cifs_resend_wdata(struct cifs_io_subrequest *wdata, struct list_head *wdata_list, - struct cifs_aio_ctx *ctx) -{ - size_t wsize; - struct cifs_credits credits; - int rc; - struct TCP_Server_Info *server = wdata->server; - - do { - if (wdata->cfile->invalidHandle) { - rc = cifs_reopen_file(wdata->cfile, false); - if (rc == -EAGAIN) - continue; - else if (rc) - break; - } - - - /* - * Wait for credits to resend this wdata. - * Note: we are attempting to resend the whole wdata not in - * segments - */ - do { - rc = server->ops->wait_mtu_credits(server, wdata->subreq.len, - &wsize, &credits); - if (rc) - goto fail; - - if (wsize < wdata->subreq.len) { - add_credits_and_wake_if(server, &credits, 0); - msleep(1000); - } - } while (wsize < wdata->subreq.len); - wdata->credits = credits; - - rc = adjust_credits(server, &wdata->credits, wdata->subreq.len); - - if (!rc) { - if (wdata->cfile->invalidHandle) - rc = -EAGAIN; - else { - set_bit(NETFS_SREQ_RETRYING, &wdata->subreq.flags); -#ifdef CONFIG_CIFS_SMB_DIRECT - if (wdata->mr) { - wdata->mr->need_invalidate = true; - smbd_deregister_mr(wdata->mr); - wdata->mr = NULL; - } -#endif - rc = server->ops->async_writev(wdata); - } - } - - /* If the write was successfully sent, we are done */ - if (!rc) { - list_add_tail(&wdata->list, wdata_list); - return 0; - } - - /* Roll back credits and retry if needed */ - add_credits_and_wake_if(server, &wdata->credits, 0); - } while (rc == -EAGAIN); - -fail: - cifs_put_writedata(wdata); - return rc; -} - -/* - * Select span of a bvec iterator we're going to use. Limit it by both maximum - * size and maximum number of segments. 
- */ -static size_t cifs_limit_bvec_subset(const struct iov_iter *iter, size_t max_size, - size_t max_segs, unsigned int *_nsegs) -{ - const struct bio_vec *bvecs = iter->bvec; - unsigned int nbv = iter->nr_segs, ix = 0, nsegs = 0; - size_t len, span = 0, n = iter->count; - size_t skip = iter->iov_offset; - - if (WARN_ON(!iov_iter_is_bvec(iter)) || n == 0) - return 0; - - while (n && ix < nbv && skip) { - len = bvecs[ix].bv_len; - if (skip < len) - break; - skip -= len; - n -= len; - ix++; - } - - while (n && ix < nbv) { - len = min3(n, bvecs[ix].bv_len - skip, max_size); - span += len; - max_size -= len; - nsegs++; - ix++; - if (max_size == 0 || nsegs >= max_segs) - break; - skip = 0; - n -= len; - } - - *_nsegs = nsegs; - return span; -} - -static int -cifs_write_from_iter(loff_t fpos, size_t len, struct iov_iter *from, - struct cifsFileInfo *open_file, - struct cifs_sb_info *cifs_sb, struct list_head *wdata_list, - struct cifs_aio_ctx *ctx) -{ - int rc = 0; - size_t cur_len, max_len; - struct cifs_io_subrequest *wdata; - pid_t pid; - struct TCP_Server_Info *server; - unsigned int xid, max_segs = INT_MAX; - - if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_RWPIDFORWARD) - pid = open_file->pid; - else - pid = current->tgid; - - server = cifs_pick_channel(tlink_tcon(open_file->tlink)->ses); - xid = get_xid(); - -#ifdef CONFIG_CIFS_SMB_DIRECT - if (server->smbd_conn) - max_segs = server->smbd_conn->max_frmr_depth; -#endif - - do { - struct cifs_credits credits_on_stack; - struct cifs_credits *credits = &credits_on_stack; - unsigned int nsegs = 0; - size_t wsize; - - if (signal_pending(current)) { - rc = -EINTR; - break; - } - - if (open_file->invalidHandle) { - rc = cifs_reopen_file(open_file, false); - if (rc == -EAGAIN) - continue; - else if (rc) - break; - } - - rc = server->ops->wait_mtu_credits(server, cifs_sb->ctx->wsize, - &wsize, credits); - if (rc) - break; - - max_len = min_t(const size_t, len, wsize); - if (!max_len) { - rc = -EAGAIN; - add_credits_and_wake_if(server, credits, 0); - break; - } - - cur_len = cifs_limit_bvec_subset(from, max_len, max_segs, &nsegs); - cifs_dbg(FYI, "write_from_iter len=%zx/%zx nsegs=%u/%lu/%u\n", - cur_len, max_len, nsegs, from->nr_segs, max_segs); - if (cur_len == 0) { - rc = -EIO; - add_credits_and_wake_if(server, credits, 0); - break; - } - - wdata = cifs_writedata_alloc(cifs_uncached_writev_complete); - if (!wdata) { - rc = -ENOMEM; - add_credits_and_wake_if(server, credits, 0); - break; - } - - wdata->uncached = true; - wdata->sync_mode = WB_SYNC_ALL; - wdata->subreq.start = (__u64)fpos; - wdata->cfile = cifsFileInfo_get(open_file); - wdata->server = server; - wdata->pid = pid; - wdata->subreq.len = cur_len; - wdata->credits = credits_on_stack; - wdata->subreq.io_iter = *from; - wdata->ctx = ctx; - kref_get(&ctx->refcount); - - iov_iter_truncate(&wdata->subreq.io_iter, cur_len); - - rc = adjust_credits(server, &wdata->credits, wdata->subreq.len); - - if (!rc) { - if (wdata->cfile->invalidHandle) - rc = -EAGAIN; - else - rc = server->ops->async_writev(wdata); - } - - if (rc) { - add_credits_and_wake_if(server, &wdata->credits, 0); - cifs_put_writedata(wdata); - if (rc == -EAGAIN) - continue; - break; - } - - list_add_tail(&wdata->list, wdata_list); - iov_iter_advance(from, cur_len); - fpos += cur_len; - len -= cur_len; - } while (len > 0); - - free_xid(xid); - return rc; -} - -static void collect_uncached_write_data(struct cifs_aio_ctx *ctx) -{ - struct cifs_io_subrequest *wdata, *tmp; - struct cifs_tcon *tcon; - struct cifs_sb_info *cifs_sb; - struct 
dentry *dentry = ctx->cfile->dentry; - ssize_t rc; - - tcon = tlink_tcon(ctx->cfile->tlink); - cifs_sb = CIFS_SB(dentry->d_sb); - - mutex_lock(&ctx->aio_mutex); - - if (list_empty(&ctx->list)) { - mutex_unlock(&ctx->aio_mutex); - return; - } - - rc = ctx->rc; - /* - * Wait for and collect replies for any successful sends in order of - * increasing offset. Once an error is hit, then return without waiting - * for any more replies. - */ -restart_loop: - list_for_each_entry_safe(wdata, tmp, &ctx->list, list) { - if (!rc) { - if (!try_wait_for_completion(&wdata->done)) { - mutex_unlock(&ctx->aio_mutex); - return; - } - - if (wdata->result) - rc = wdata->result; - else - ctx->total_len += wdata->subreq.len; - - /* resend call if it's a retryable error */ - if (rc == -EAGAIN) { - struct list_head tmp_list; - struct iov_iter tmp_from = ctx->iter; - - INIT_LIST_HEAD(&tmp_list); - list_del_init(&wdata->list); - - if (ctx->direct_io) - rc = cifs_resend_wdata( - wdata, &tmp_list, ctx); - else { - iov_iter_advance(&tmp_from, - wdata->subreq.start - ctx->pos); - - rc = cifs_write_from_iter(wdata->subreq.start, - wdata->subreq.len, &tmp_from, - ctx->cfile, cifs_sb, &tmp_list, - ctx); - - cifs_put_writedata(wdata); - } - - list_splice(&tmp_list, &ctx->list); - goto restart_loop; - } - } - list_del_init(&wdata->list); - cifs_put_writedata(wdata); - } - - cifs_stats_bytes_written(tcon, ctx->total_len); - set_bit(CIFS_INO_INVALID_MAPPING, &CIFS_I(dentry->d_inode)->flags); - - ctx->rc = (rc == 0) ? ctx->total_len : rc; - - mutex_unlock(&ctx->aio_mutex); - - if (ctx->iocb && ctx->iocb->ki_complete) - ctx->iocb->ki_complete(ctx->iocb, ctx->rc); - else - complete(&ctx->done); -} - -static ssize_t __cifs_writev( - struct kiocb *iocb, struct iov_iter *from, bool direct) -{ - struct file *file = iocb->ki_filp; - ssize_t total_written = 0; - struct cifsFileInfo *cfile; - struct cifs_tcon *tcon; - struct cifs_sb_info *cifs_sb; - struct cifs_aio_ctx *ctx; - int rc; - - rc = generic_write_checks(iocb, from); - if (rc <= 0) - return rc; - - cifs_sb = CIFS_FILE_SB(file); - cfile = file->private_data; - tcon = tlink_tcon(cfile->tlink); - - if (!tcon->ses->server->ops->async_writev) - return -ENOSYS; - - ctx = cifs_aio_ctx_alloc(); - if (!ctx) - return -ENOMEM; - - ctx->cfile = cifsFileInfo_get(cfile); - - if (!is_sync_kiocb(iocb)) - ctx->iocb = iocb; - - ctx->pos = iocb->ki_pos; - ctx->direct_io = direct; - ctx->nr_pinned_pages = 0; - - if (user_backed_iter(from)) { - /* - * Extract IOVEC/UBUF-type iterators to a BVEC-type iterator as - * they contain references to the calling process's virtual - * memory layout which won't be available in an async worker - * thread. This also takes a pin on every folio involved. - */ - rc = netfs_extract_user_iter(from, iov_iter_count(from), - &ctx->iter, 0); - if (rc < 0) { - kref_put(&ctx->refcount, cifs_aio_ctx_release); - return rc; - } - - ctx->nr_pinned_pages = rc; - ctx->bv = (void *)ctx->iter.bvec; - ctx->bv_need_unpin = iov_iter_extract_will_pin(from); - } else if ((iov_iter_is_bvec(from) || iov_iter_is_kvec(from)) && - !is_sync_kiocb(iocb)) { - /* - * If the op is asynchronous, we need to copy the list attached - * to a BVEC/KVEC-type iterator, but we assume that the storage - * will be pinned by the caller; in any case, we may or may not - * be able to pin the pages, so we don't try. 
- */ - ctx->bv = (void *)dup_iter(&ctx->iter, from, GFP_KERNEL); - if (!ctx->bv) { - kref_put(&ctx->refcount, cifs_aio_ctx_release); - return -ENOMEM; - } - } else { - /* - * Otherwise, we just pass the iterator down as-is and rely on - * the caller to make sure the pages referred to by the - * iterator don't evaporate. - */ - ctx->iter = *from; - } - - ctx->len = iov_iter_count(&ctx->iter); - - /* grab a lock here due to read response handlers can access ctx */ - mutex_lock(&ctx->aio_mutex); - - rc = cifs_write_from_iter(iocb->ki_pos, ctx->len, &ctx->iter, - cfile, cifs_sb, &ctx->list, ctx); - - /* - * If at least one write was successfully sent, then discard any rc - * value from the later writes. If the other write succeeds, then - * we'll end up returning whatever was written. If it fails, then - * we'll get a new rc value from that. - */ - if (!list_empty(&ctx->list)) - rc = 0; - - mutex_unlock(&ctx->aio_mutex); - - if (rc) { - kref_put(&ctx->refcount, cifs_aio_ctx_release); - return rc; - } - - if (!is_sync_kiocb(iocb)) { - kref_put(&ctx->refcount, cifs_aio_ctx_release); - return -EIOCBQUEUED; - } - - rc = wait_for_completion_killable(&ctx->done); - if (rc) { - mutex_lock(&ctx->aio_mutex); - ctx->rc = rc = -EINTR; - total_written = ctx->total_len; - mutex_unlock(&ctx->aio_mutex); - } else { - rc = ctx->rc; - total_written = ctx->total_len; - } - - kref_put(&ctx->refcount, cifs_aio_ctx_release); - - if (unlikely(!total_written)) - return rc; - - iocb->ki_pos += total_written; - return total_written; -} - -ssize_t cifs_direct_writev(struct kiocb *iocb, struct iov_iter *from) -{ - struct file *file = iocb->ki_filp; - - cifs_revalidate_mapping(file->f_inode); - return __cifs_writev(iocb, from, true); -} - -ssize_t cifs_user_writev(struct kiocb *iocb, struct iov_iter *from) -{ - return __cifs_writev(iocb, from, false); -} -#endif // TODO remove 3594 - static ssize_t cifs_writev(struct kiocb *iocb, struct iov_iter *from) { @@ -3242,450 +2777,6 @@ cifs_strict_writev(struct kiocb *iocb, struct iov_iter *from) return written; } -#if 0 // TODO remove 4143 -static struct cifs_io_subrequest *cifs_readdata_alloc(work_func_t complete) -{ - struct cifs_io_subrequest *rdata; - - rdata = kzalloc(sizeof(*rdata), GFP_KERNEL); - if (rdata) { - refcount_set(&rdata->subreq.ref, 1); - INIT_LIST_HEAD(&rdata->list); - init_completion(&rdata->done); - INIT_WORK(&rdata->work, complete); - } - - return rdata; -} - -void -cifs_readdata_release(struct cifs_io_subrequest *rdata) -{ - if (rdata->ctx) - kref_put(&rdata->ctx->refcount, cifs_aio_ctx_release); -#ifdef CONFIG_CIFS_SMB_DIRECT - if (rdata->mr) { - smbd_deregister_mr(rdata->mr); - rdata->mr = NULL; - } -#endif - if (rdata->cfile) - cifsFileInfo_put(rdata->cfile); - - kfree(rdata); -} - -static void collect_uncached_read_data(struct cifs_aio_ctx *ctx); - -static void -cifs_uncached_readv_complete(struct work_struct *work) -{ - struct cifs_io_subrequest *rdata = - container_of(work, struct cifs_io_subrequest, work); - - complete(&rdata->done); - collect_uncached_read_data(rdata->ctx); - /* the below call can possibly free the last ref to aio ctx */ - cifs_put_readdata(rdata); -} - -static int cifs_resend_rdata(struct cifs_io_subrequest *rdata, - struct list_head *rdata_list, - struct cifs_aio_ctx *ctx) -{ - size_t rsize; - struct cifs_credits credits; - int rc; - struct TCP_Server_Info *server; - - /* XXX: should we pick a new channel here? 
*/ - server = rdata->server; - - do { - if (rdata->cfile->invalidHandle) { - rc = cifs_reopen_file(rdata->cfile, true); - if (rc == -EAGAIN) - continue; - else if (rc) - break; - } - - /* - * Wait for credits to resend this rdata. - * Note: we are attempting to resend the whole rdata not in - * segments - */ - do { - rc = server->ops->wait_mtu_credits(server, rdata->subreq.len, - &rsize, &credits); - - if (rc) - goto fail; - - if (rsize < rdata->subreq.len) { - add_credits_and_wake_if(server, &credits, 0); - msleep(1000); - } - } while (rsize < rdata->subreq.len); - rdata->credits = credits; - - rc = adjust_credits(server, &rdata->credits, rdata->subreq.len); - if (!rc) { - if (rdata->cfile->invalidHandle) - rc = -EAGAIN; - else { -#ifdef CONFIG_CIFS_SMB_DIRECT - if (rdata->mr) { - rdata->mr->need_invalidate = true; - smbd_deregister_mr(rdata->mr); - rdata->mr = NULL; - } -#endif - rc = server->ops->async_readv(rdata); - } - } - - /* If the read was successfully sent, we are done */ - if (!rc) { - /* Add to aio pending list */ - list_add_tail(&rdata->list, rdata_list); - return 0; - } - - /* Roll back credits and retry if needed */ - add_credits_and_wake_if(server, &rdata->credits, 0); - } while (rc == -EAGAIN); - -fail: - cifs_put_readdata(rdata); - return rc; -} - -static int -cifs_send_async_read(loff_t fpos, size_t len, struct cifsFileInfo *open_file, - struct cifs_sb_info *cifs_sb, struct list_head *rdata_list, - struct cifs_aio_ctx *ctx) -{ - struct cifs_io_subrequest *rdata; - unsigned int nsegs, max_segs = INT_MAX; - struct cifs_credits credits_on_stack; - struct cifs_credits *credits = &credits_on_stack; - size_t cur_len, max_len, rsize; - int rc; - pid_t pid; - struct TCP_Server_Info *server; - - server = cifs_pick_channel(tlink_tcon(open_file->tlink)->ses); - -#ifdef CONFIG_CIFS_SMB_DIRECT - if (server->smbd_conn) - max_segs = server->smbd_conn->max_frmr_depth; -#endif - - if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_RWPIDFORWARD) - pid = open_file->pid; - else - pid = current->tgid; - - do { - if (open_file->invalidHandle) { - rc = cifs_reopen_file(open_file, true); - if (rc == -EAGAIN) - continue; - else if (rc) - break; - } - - if (cifs_sb->ctx->rsize == 0) - cifs_sb->ctx->rsize = - server->ops->negotiate_rsize(tlink_tcon(open_file->tlink), - cifs_sb->ctx); - - rc = server->ops->wait_mtu_credits(server, cifs_sb->ctx->rsize, - &rsize, credits); - if (rc) - break; - - max_len = min_t(size_t, len, rsize); - - cur_len = cifs_limit_bvec_subset(&ctx->iter, max_len, - max_segs, &nsegs); - cifs_dbg(FYI, "read-to-iter len=%zx/%zx nsegs=%u/%lu/%u\n", - cur_len, max_len, nsegs, ctx->iter.nr_segs, max_segs); - if (cur_len == 0) { - rc = -EIO; - add_credits_and_wake_if(server, credits, 0); - break; - } - - rdata = cifs_readdata_alloc(cifs_uncached_readv_complete); - if (!rdata) { - add_credits_and_wake_if(server, credits, 0); - rc = -ENOMEM; - break; - } - - rdata->server = server; - rdata->cfile = cifsFileInfo_get(open_file); - rdata->subreq.start = fpos; - rdata->subreq.len = cur_len; - rdata->pid = pid; - rdata->credits = credits_on_stack; - rdata->ctx = ctx; - kref_get(&ctx->refcount); - - rdata->subreq.io_iter = ctx->iter; - iov_iter_truncate(&rdata->subreq.io_iter, cur_len); - - rc = adjust_credits(server, &rdata->credits, rdata->subreq.len); - - if (!rc) { - if (rdata->cfile->invalidHandle) - rc = -EAGAIN; - else - rc = server->ops->async_readv(rdata); - } - - if (rc) { - add_credits_and_wake_if(server, &rdata->credits, 0); - cifs_put_readdata(rdata); - if (rc == -EAGAIN) - continue; - 
break; - } - - list_add_tail(&rdata->list, rdata_list); - iov_iter_advance(&ctx->iter, cur_len); - fpos += cur_len; - len -= cur_len; - } while (len > 0); - - return rc; -} - -static void -collect_uncached_read_data(struct cifs_aio_ctx *ctx) -{ - struct cifs_io_subrequest *rdata, *tmp; - struct cifs_sb_info *cifs_sb; - int rc; - - cifs_sb = CIFS_SB(ctx->cfile->dentry->d_sb); - - mutex_lock(&ctx->aio_mutex); - - if (list_empty(&ctx->list)) { - mutex_unlock(&ctx->aio_mutex); - return; - } - - rc = ctx->rc; - /* the loop below should proceed in the order of increasing offsets */ -again: - list_for_each_entry_safe(rdata, tmp, &ctx->list, list) { - if (!rc) { - if (!try_wait_for_completion(&rdata->done)) { - mutex_unlock(&ctx->aio_mutex); - return; - } - - if (rdata->result == -EAGAIN) { - /* resend call if it's a retryable error */ - struct list_head tmp_list; - unsigned int got_bytes = rdata->got_bytes; - - list_del_init(&rdata->list); - INIT_LIST_HEAD(&tmp_list); - - if (ctx->direct_io) { - /* - * Re-use rdata as this is a - * direct I/O - */ - rc = cifs_resend_rdata( - rdata, - &tmp_list, ctx); - } else { - rc = cifs_send_async_read( - rdata->subreq.start + got_bytes, - rdata->subreq.len - got_bytes, - rdata->cfile, cifs_sb, - &tmp_list, ctx); - - cifs_put_readdata(rdata); - } - - list_splice(&tmp_list, &ctx->list); - - goto again; - } else if (rdata->result) - rc = rdata->result; - - /* if there was a short read -- discard anything left */ - if (rdata->got_bytes && rdata->got_bytes < rdata->subreq.len) - rc = -ENODATA; - - ctx->total_len += rdata->got_bytes; - } - list_del_init(&rdata->list); - cifs_put_readdata(rdata); - } - - /* mask nodata case */ - if (rc == -ENODATA) - rc = 0; - - ctx->rc = (rc == 0) ? (ssize_t)ctx->total_len : rc; - - mutex_unlock(&ctx->aio_mutex); - - if (ctx->iocb && ctx->iocb->ki_complete) - ctx->iocb->ki_complete(ctx->iocb, ctx->rc); - else - complete(&ctx->done); -} - -static ssize_t __cifs_readv( - struct kiocb *iocb, struct iov_iter *to, bool direct) -{ - size_t len; - struct file *file = iocb->ki_filp; - struct cifs_sb_info *cifs_sb; - struct cifsFileInfo *cfile; - struct cifs_tcon *tcon; - ssize_t rc, total_read = 0; - loff_t offset = iocb->ki_pos; - struct cifs_aio_ctx *ctx; - - len = iov_iter_count(to); - if (!len) - return 0; - - cifs_sb = CIFS_FILE_SB(file); - cfile = file->private_data; - tcon = tlink_tcon(cfile->tlink); - - if (!tcon->ses->server->ops->async_readv) - return -ENOSYS; - - if ((file->f_flags & O_ACCMODE) == O_WRONLY) - cifs_dbg(FYI, "attempting read on write only file instance\n"); - - ctx = cifs_aio_ctx_alloc(); - if (!ctx) - return -ENOMEM; - - ctx->pos = offset; - ctx->direct_io = direct; - ctx->len = len; - ctx->cfile = cifsFileInfo_get(cfile); - ctx->nr_pinned_pages = 0; - - if (!is_sync_kiocb(iocb)) - ctx->iocb = iocb; - - if (user_backed_iter(to)) { - /* - * Extract IOVEC/UBUF-type iterators to a BVEC-type iterator as - * they contain references to the calling process's virtual - * memory layout which won't be available in an async worker - * thread. This also takes a pin on every folio involved. 
- */ - rc = netfs_extract_user_iter(to, iov_iter_count(to), - &ctx->iter, 0); - if (rc < 0) { - kref_put(&ctx->refcount, cifs_aio_ctx_release); - return rc; - } - - ctx->nr_pinned_pages = rc; - ctx->bv = (void *)ctx->iter.bvec; - ctx->bv_need_unpin = iov_iter_extract_will_pin(to); - ctx->should_dirty = true; - } else if ((iov_iter_is_bvec(to) || iov_iter_is_kvec(to)) && - !is_sync_kiocb(iocb)) { - /* - * If the op is asynchronous, we need to copy the list attached - * to a BVEC/KVEC-type iterator, but we assume that the storage - * will be retained by the caller; in any case, we may or may - * not be able to pin the pages, so we don't try. - */ - ctx->bv = (void *)dup_iter(&ctx->iter, to, GFP_KERNEL); - if (!ctx->bv) { - kref_put(&ctx->refcount, cifs_aio_ctx_release); - return -ENOMEM; - } - } else { - /* - * Otherwise, we just pass the iterator down as-is and rely on - * the caller to make sure the pages referred to by the - * iterator don't evaporate. - */ - ctx->iter = *to; - } - - if (direct) { - rc = filemap_write_and_wait_range(file->f_inode->i_mapping, - offset, offset + len - 1); - if (rc) { - kref_put(&ctx->refcount, cifs_aio_ctx_release); - return -EAGAIN; - } - } - - /* grab a lock here due to read response handlers can access ctx */ - mutex_lock(&ctx->aio_mutex); - - rc = cifs_send_async_read(offset, len, cfile, cifs_sb, &ctx->list, ctx); - - /* if at least one read request send succeeded, then reset rc */ - if (!list_empty(&ctx->list)) - rc = 0; - - mutex_unlock(&ctx->aio_mutex); - - if (rc) { - kref_put(&ctx->refcount, cifs_aio_ctx_release); - return rc; - } - - if (!is_sync_kiocb(iocb)) { - kref_put(&ctx->refcount, cifs_aio_ctx_release); - return -EIOCBQUEUED; - } - - rc = wait_for_completion_killable(&ctx->done); - if (rc) { - mutex_lock(&ctx->aio_mutex); - ctx->rc = rc = -EINTR; - total_read = ctx->total_len; - mutex_unlock(&ctx->aio_mutex); - } else { - rc = ctx->rc; - total_read = ctx->total_len; - } - - kref_put(&ctx->refcount, cifs_aio_ctx_release); - - if (total_read) { - iocb->ki_pos += total_read; - return total_read; - } - return rc; -} - -ssize_t cifs_direct_readv(struct kiocb *iocb, struct iov_iter *to) -{ - return __cifs_readv(iocb, to, true); -} - -ssize_t cifs_user_readv(struct kiocb *iocb, struct iov_iter *to) -{ - return __cifs_readv(iocb, to, false); - -} -#endif // end netfslib removal 4143 - ssize_t cifs_loose_read_iter(struct kiocb *iocb, struct iov_iter *iter) { ssize_t rc; @@ -3784,101 +2875,6 @@ cifs_strict_readv(struct kiocb *iocb, struct iov_iter *to) return rc; } -#if 0 // TODO remove 4633 -static ssize_t -cifs_read(struct file *file, char *read_data, size_t read_size, loff_t *offset) -{ - int rc = -EACCES; - unsigned int bytes_read = 0; - unsigned int total_read; - unsigned int current_read_size; - unsigned int rsize; - struct cifs_sb_info *cifs_sb; - struct cifs_tcon *tcon; - struct TCP_Server_Info *server; - unsigned int xid; - char *cur_offset; - struct cifsFileInfo *open_file; - struct cifs_io_parms io_parms = {0}; - int buf_type = CIFS_NO_BUFFER; - __u32 pid; - - xid = get_xid(); - cifs_sb = CIFS_FILE_SB(file); - - /* FIXME: set up handlers for larger reads and/or convert to async */ - rsize = min_t(unsigned int, cifs_sb->ctx->rsize, CIFSMaxBufSize); - - if (file->private_data == NULL) { - rc = -EBADF; - free_xid(xid); - return rc; - } - open_file = file->private_data; - tcon = tlink_tcon(open_file->tlink); - server = cifs_pick_channel(tcon->ses); - - if (!server->ops->sync_read) { - free_xid(xid); - return -ENOSYS; - } - - if 
(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_RWPIDFORWARD) - pid = open_file->pid; - else - pid = current->tgid; - - if ((file->f_flags & O_ACCMODE) == O_WRONLY) - cifs_dbg(FYI, "attempting read on write only file instance\n"); - - for (total_read = 0, cur_offset = read_data; read_size > total_read; - total_read += bytes_read, cur_offset += bytes_read) { - do { - current_read_size = min_t(uint, read_size - total_read, - rsize); - /* - * For windows me and 9x we do not want to request more - * than it negotiated since it will refuse the read - * then. - */ - if (!(tcon->ses->capabilities & - tcon->ses->server->vals->cap_large_files)) { - current_read_size = min_t(uint, - current_read_size, CIFSMaxBufSize); - } - if (open_file->invalidHandle) { - rc = cifs_reopen_file(open_file, true); - if (rc != 0) - break; - } - io_parms.pid = pid; - io_parms.tcon = tcon; - io_parms.offset = *offset; - io_parms.length = current_read_size; - io_parms.server = server; - rc = server->ops->sync_read(xid, &open_file->fid, &io_parms, - &bytes_read, &cur_offset, - &buf_type); - } while (rc == -EAGAIN); - - if (rc || (bytes_read == 0)) { - if (total_read) { - break; - } else { - free_xid(xid); - return rc; - } - } else { - cifs_stats_bytes_read(tcon, total_read); - *offset += bytes_read; - } - } - free_xid(xid); - return total_read; -} -#endif // end netfslib remove 4633 - - static vm_fault_t cifs_page_mkwrite(struct vm_fault *vmf) { return netfs_page_mkwrite(vmf, NULL);