From patchwork Fri Oct 14 03:07:43 2022
X-Patchwork-Submitter: Jia Zhu
X-Patchwork-Id: 2471
From: Jia Zhu
To: dhowells@redhat.com, xiang@kernel.org, jefflexu@linux.alibaba.com
Cc: linux-cachefs@redhat.com, linux-erofs@lists.ozlabs.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    yinxin.x@bytedance.com, Jia Zhu
Subject: [PATCH V2 3/5] cachefiles: resend an open request if the read request's object is closed
Date: Fri, 14 Oct 2022 11:07:43 +0800
Message-Id: <20221014030745.25748-4-zhujia.zj@bytedance.com>
X-Mailer: git-send-email 2.37.0 (Apple Git-136)
In-Reply-To: <20221014030745.25748-1-zhujia.zj@bytedance.com>
References: <20221014030745.25748-1-zhujia.zj@bytedance.com>

When an anonymous fd is closed by the user daemon and a new read request
for the file comes up, the anonymous fd should be re-opened to handle
that read request rather than failing it outright.

1. Introduce a reopening state for objects that are closed but still
   have in-flight or subsequent read requests.
2. When the anonymous fd is closed, no longer flush READ requests; only
   flush CLOSE requests.
3. Enqueue the reopen work on a workqueue, so that the user daemon can
   handle the resulting OPEN request outside of the daemon_read context.
   Otherwise, the user daemon would send a reopen request and then wait
   for itself to process it.

Signed-off-by: Jia Zhu
Reviewed-by: Xin Yin
Reviewed-by: Jingbo Xu
---
A short daemon-side sketch of the resulting request flow is appended
after the diff.

 fs/cachefiles/internal.h |  3 ++
 fs/cachefiles/ondemand.c | 94 ++++++++++++++++++++++++++++------------
 2 files changed, 69 insertions(+), 28 deletions(-)

diff --git a/fs/cachefiles/internal.h b/fs/cachefiles/internal.h
index be1e7fbdb444..21ef5007f488 100644
--- a/fs/cachefiles/internal.h
+++ b/fs/cachefiles/internal.h
@@ -48,9 +48,11 @@ struct cachefiles_volume {
 enum cachefiles_object_state {
 	CACHEFILES_ONDEMAND_OBJSTATE_close, /* Anonymous fd closed by daemon or initial state */
 	CACHEFILES_ONDEMAND_OBJSTATE_open, /* Anonymous fd associated with object is available */
+	CACHEFILES_ONDEMAND_OBJSTATE_reopening, /* Object that was closed and is being reopened. */
 };
 
 struct cachefiles_ondemand_info {
+	struct work_struct	work;
 	int			ondemand_id;
 	enum cachefiles_object_state state;
 	struct cachefiles_object *object;
@@ -324,6 +326,7 @@ cachefiles_ondemand_set_object_##_state(struct cachefiles_object *object) \
 
 CACHEFILES_OBJECT_STATE_FUNCS(open);
 CACHEFILES_OBJECT_STATE_FUNCS(close);
+CACHEFILES_OBJECT_STATE_FUNCS(reopening);
 #else
 static inline ssize_t cachefiles_ondemand_daemon_read(struct cachefiles_cache *cache,
 					char __user *_buffer, size_t buflen)
diff --git a/fs/cachefiles/ondemand.c b/fs/cachefiles/ondemand.c
index 6e47667c6690..c9eea89befec 100644
--- a/fs/cachefiles/ondemand.c
+++ b/fs/cachefiles/ondemand.c
@@ -18,14 +18,10 @@ static int cachefiles_ondemand_fd_release(struct inode *inode,
 	info->ondemand_id = CACHEFILES_ONDEMAND_ID_CLOSED;
 	cachefiles_ondemand_set_object_close(object);
 
-	/*
-	 * Flush all pending READ requests since their completion depends on
-	 * anon_fd.
-	 */
-	xas_for_each(&xas, req, ULONG_MAX) {
+	/* Only flush CACHEFILES_REQ_NEW marked req to avoid race with daemon_read */
+	xas_for_each_marked(&xas, req, ULONG_MAX, CACHEFILES_REQ_NEW) {
 		if (req->msg.object_id == object_id &&
-		    req->msg.opcode == CACHEFILES_OP_READ) {
-			req->error = -EIO;
+		    req->msg.opcode == CACHEFILES_OP_CLOSE) {
 			complete(&req->done);
 			xas_store(&xas, NULL);
 		}
@@ -179,6 +175,7 @@ int cachefiles_ondemand_copen(struct cachefiles_cache *cache, char *args)
 	trace_cachefiles_ondemand_copen(req->object, id, size);
 
 	cachefiles_ondemand_set_object_open(req->object);
+	wake_up_all(&cache->daemon_pollwq);
 
 out:
 	complete(&req->done);
@@ -238,6 +235,40 @@ static int cachefiles_ondemand_get_fd(struct cachefiles_req *req)
 	return ret;
 }
 
+static void ondemand_object_worker(struct work_struct *work)
+{
+	struct cachefiles_object *object =
+		((struct cachefiles_ondemand_info *)work)->object;
+
+	cachefiles_ondemand_init_object(object);
+}
+
+/*
+ * Reopen the closed object with associated read request.
+ * Skip read requests whose related object are reopening.
+ */
+static struct cachefiles_req *cachefiles_ondemand_select_req(struct xa_state *xas,
+							      unsigned long xa_max)
+{
+	struct cachefiles_req *req;
+	struct cachefiles_ondemand_info *info;
+
+	xas_for_each_marked(xas, req, xa_max, CACHEFILES_REQ_NEW) {
+		if (req->msg.opcode != CACHEFILES_OP_READ)
+			return req;
+		info = req->object->private;
+		if (info->state == CACHEFILES_ONDEMAND_OBJSTATE_close) {
+			cachefiles_ondemand_set_object_reopening(req->object);
+			queue_work(fscache_wq, &info->work);
+			continue;
+		} else if (info->state == CACHEFILES_ONDEMAND_OBJSTATE_reopening) {
+			continue;
+		}
+		return req;
+	}
+	return NULL;
+}
+
 ssize_t cachefiles_ondemand_daemon_read(struct cachefiles_cache *cache,
 					char __user *_buffer, size_t buflen)
 {
@@ -248,16 +279,16 @@ ssize_t cachefiles_ondemand_daemon_read(struct cachefiles_cache *cache,
 	int ret = 0;
 	XA_STATE(xas, &cache->reqs, cache->req_id_next);
 
+	xa_lock(&cache->reqs);
 	/*
 	 * Cyclically search for a request that has not ever been processed,
 	 * to prevent requests from being processed repeatedly, and make
 	 * request distribution fair.
 	 */
-	xa_lock(&cache->reqs);
-	req = xas_find_marked(&xas, UINT_MAX, CACHEFILES_REQ_NEW);
+	req = cachefiles_ondemand_select_req(&xas, ULONG_MAX);
 	if (!req && cache->req_id_next > 0) {
 		xas_set(&xas, 0);
-		req = xas_find_marked(&xas, cache->req_id_next - 1, CACHEFILES_REQ_NEW);
+		req = cachefiles_ondemand_select_req(&xas, cache->req_id_next - 1);
 	}
 	if (!req) {
 		xa_unlock(&cache->reqs);
@@ -277,14 +308,18 @@ ssize_t cachefiles_ondemand_daemon_read(struct cachefiles_cache *cache,
 	xa_unlock(&cache->reqs);
 
 	id = xas.xa_index;
-	msg->msg_id = id;
 
 	if (msg->opcode == CACHEFILES_OP_OPEN) {
 		ret = cachefiles_ondemand_get_fd(req);
-		if (ret)
+		if (ret) {
+			cachefiles_ondemand_set_object_close(req->object);
 			goto error;
+		}
 	}
 
+	msg->msg_id = id;
+	msg->object_id = req->object->private->ondemand_id;
+
 	if (copy_to_user(_buffer, msg, n) != 0) {
 		ret = -EFAULT;
 		goto err_put_fd;
 	}
@@ -317,19 +352,23 @@ static int cachefiles_ondemand_send_req(struct cachefiles_object *object,
 					void *private)
 {
 	struct cachefiles_cache *cache = object->volume->cache;
-	struct cachefiles_req *req;
+	struct cachefiles_req *req = NULL;
 	XA_STATE(xas, &cache->reqs, 0);
 	int ret;
 
 	if (!test_bit(CACHEFILES_ONDEMAND_MODE, &cache->flags))
 		return 0;
 
-	if (test_bit(CACHEFILES_DEAD, &cache->flags))
-		return -EIO;
+	if (test_bit(CACHEFILES_DEAD, &cache->flags)) {
+		ret = -EIO;
+		goto out;
+	}
 
 	req = kzalloc(sizeof(*req) + data_len, GFP_KERNEL);
-	if (!req)
-		return -ENOMEM;
+	if (!req) {
+		ret = -ENOMEM;
+		goto out;
+	}
 
 	req->object = object;
 	init_completion(&req->done);
@@ -367,7 +406,7 @@ static int cachefiles_ondemand_send_req(struct cachefiles_object *object,
 		/* coupled with the barrier in cachefiles_flush_reqs() */
 		smp_mb();
 
-		if (opcode != CACHEFILES_OP_OPEN &&
+		if (opcode == CACHEFILES_OP_CLOSE &&
 		    !cachefiles_ondemand_object_is_open(object)) {
 			WARN_ON_ONCE(object->private->ondemand_id == 0);
 			xas_unlock(&xas);
@@ -392,7 +431,15 @@ static int cachefiles_ondemand_send_req(struct cachefiles_object *object,
 	wake_up_all(&cache->daemon_pollwq);
 	wait_for_completion(&req->done);
 	ret = req->error;
+	kfree(req);
+	return ret;
 out:
+	/* Reset the object to close state in error handling path.
+	 * If error occurs after creating the anonymous fd,
+	 * cachefiles_ondemand_fd_release() will set object to close.
+	 */
+	if (opcode == CACHEFILES_OP_OPEN)
+		cachefiles_ondemand_set_object_close(req->object);
 	kfree(req);
 	return ret;
 }
@@ -439,7 +486,6 @@ static int cachefiles_ondemand_init_close_req(struct cachefiles_req *req,
 	if (!cachefiles_ondemand_object_is_open(object))
 		return -ENOENT;
 
-	req->msg.object_id = object->private->ondemand_id;
 	trace_cachefiles_ondemand_close(object, &req->msg);
 	return 0;
 }
@@ -455,16 +501,7 @@ static int cachefiles_ondemand_init_read_req(struct cachefiles_req *req,
 	struct cachefiles_object *object = req->object;
 	struct cachefiles_read *load = (void *)req->msg.data;
 	struct cachefiles_read_ctx *read_ctx = private;
-	int object_id = object->private->ondemand_id;
-
-	/* Stop enqueuing requests when daemon has closed anon_fd. */
-	if (!cachefiles_ondemand_object_is_open(object)) {
-		WARN_ON_ONCE(object_id == 0);
-		pr_info_once("READ: anonymous fd closed prematurely.\n");
-		return -EIO;
-	}
 
-	req->msg.object_id = object_id;
 	load->off = read_ctx->off;
 	load->len = read_ctx->len;
 	trace_cachefiles_ondemand_read(object, &req->msg, load);
@@ -513,6 +550,7 @@ int cachefiles_ondemand_init_obj_info(struct cachefiles_object *object,
 		return -ENOMEM;
 
 	object->private->object = object;
+	INIT_WORK(&object->private->work, ondemand_object_worker);
 	return 0;
 }
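
For context, the daemon-side sketch referenced above: a trimmed user-space
request loop showing the behaviour this patch enables. After this change,
closing an anonymous fd no longer fails pending READ requests with -EIO;
the kernel queues ondemand_object_worker() and the daemon simply sees a
fresh OPEN request for the same object before further READs. This is a
sketch only, assuming the ondemand UAPI described in
Documentation/filesystems/caching/cachefiles.rst (struct cachefiles_msg,
the "copen" reply written back to the /dev/cachefiles fd, and the
CACHEFILES_IOC_READ_COMPLETE ioctl); save_fd()/lookup_fd() and the fixed
cache size are illustrative placeholders, not part of this patch.

/* Illustrative cachefiles ondemand daemon loop; error handling omitted. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/cachefiles.h>	/* cachefiles_msg, CACHEFILES_OP_*, CACHEFILES_IOC_READ_COMPLETE */

/* Placeholder bookkeeping: remember the anon fd handed out per object_id. */
static int fd_table[65536];
static void save_fd(unsigned int object_id, int fd) { fd_table[object_id % 65536] = fd; }
static int lookup_fd(unsigned int object_id) { return fd_table[object_id % 65536]; }

static void handle_one_request(int devfd)
{
	char buf[4096];
	ssize_t n = read(devfd, buf, sizeof(buf));	/* one request per read() */

	if (n <= 0)
		return;

	struct cachefiles_msg *msg = (struct cachefiles_msg *)buf;

	switch (msg->opcode) {
	case CACHEFILES_OP_OPEN: {
		/* With this patch, an OPEN may also arrive because a closed
		 * object is being reopened on behalf of a pending READ. */
		struct cachefiles_open *load = (struct cachefiles_open *)msg->data;
		char reply[64];

		save_fd(msg->object_id, load->fd);
		/* Reply "copen <msg_id>,<cache size>"; the size is a placeholder. */
		snprintf(reply, sizeof(reply), "copen %u,%llu",
			 msg->msg_id, 1024ULL * 1024);
		write(devfd, reply, strlen(reply));
		break;
	}
	case CACHEFILES_OP_READ: {
		/* Fetch load->off/load->len, write it into the cache file via
		 * the anon fd, then signal completion. */
		struct cachefiles_read *load = (struct cachefiles_read *)msg->data;

		(void)load;
		ioctl(lookup_fd(msg->object_id), CACHEFILES_IOC_READ_COMPLETE,
		      msg->msg_id);
		break;
	}
	case CACHEFILES_OP_CLOSE:
		/* Closing the anon fd now only flushes pending CLOSE requests;
		 * outstanding READs make the kernel re-send an OPEN instead of
		 * failing them. */
		close(lookup_fd(msg->object_id));
		break;
	}
}

In a real daemon this loop would run after the usual setup commands
("dir", "tag", "bind ondemand") have been written to the /dev/cachefiles
fd and would be driven by poll() on that fd, as described in the
cachefiles documentation.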