Message ID: 20230914154553.71939-1-shikemeng@huaweicloud.com
State: New
Headers:
From: Kemeng Shi <shikemeng@huaweicloud.com>
To: miklos@szeredi.hu, bernd.schubert@fastmail.fm, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH] fuse: remove unneeded lock which protecting update of congestion_threshold
Date: Thu, 14 Sep 2023 23:45:53 +0800
Message-Id: <20230914154553.71939-1-shikemeng@huaweicloud.com>
List-ID: <linux-kernel.vger.kernel.org>
Series: fuse: remove unneeded lock which protecting update of congestion_threshold
Commit Message
Kemeng Shi
Sept. 14, 2023, 3:45 p.m. UTC
Commit 670d21c6e17f6 ("fuse: remove reliance on bdi congestion") changed how
congestion_threshold is used, and the lock in
fuse_conn_congestion_threshold_write is not needed anymore:
1. Access to the super_block was removed along with the bdi congestion code,
so down_read(&fc->killsb), which protected access to the super_block, is no
longer needed.
2. num_background is compared against congestion_threshold without holding
bg_lock, so there is no need to hold bg_lock to update
congestion_threshold.
Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
---
fs/fuse/control.c | 4 ----
1 file changed, 4 deletions(-)
Comments
On 9/14/23 17:45, Kemeng Shi wrote:
> [full patch description and diff quoted]

Yeah, I don't see readers holding any of these locks.
I just wonder if it wouldn't be better to use WRITE_ONCE to ensure a single atomic operation to store the value.

Thanks,
Bernd
on 9/16/2023 7:06 PM, Bernd Schubert wrote:
> On 9/14/23 17:45, Kemeng Shi wrote:
>> [full patch description and diff quoted]
>
> Yeah, I don't see readers holding any of these locks.
> I just wonder if it wouldn't be better to use WRITE_ONCE to ensure a single atomic operation to store the value.

Sure, WRITE_ONCE looks better. I wonder if we should use READ_ONCE in the reader.
Would like to get any advice. Thanks!
On 9/19/23 08:11, Kemeng Shi wrote:
> on 9/16/2023 7:06 PM, Bernd Schubert wrote:
>> [earlier discussion snipped]
>>
>> I just wonder if it wouldn't be better to use WRITE_ONCE to ensure a single atomic operation to store the value.
> Sure, WRITE_ONCE looks better. I wonder if we should use READ_ONCE in the reader.
> Would like to get any advice. Thanks!

I'm not entirely sure either, but I _think_ the compiler is free to store a 32 bit value with multiple operations (like 2 x 16 bit). In that case a competing reading thread might read garbage...
Although I don't see this documented here:
https://www.kernel.org/doc/Documentation/memory-barriers.txt
What is documented there is that the compiler is free to omit the store entirely, see
"(*) Similarly, the compiler is within its rights to omit a store entirely"

Regarding READ_ONCE, I don't have a strong opinion: if the compiler makes some optimizations and the value is wrong for a few cycles, would that matter for this variable? Unless the compiler is really creative and the variable never gets updated... READ_ONCE would certainly be safer, but I don't know if it is needed.
See the section
"The compiler is within its rights to omit a load entirely if it knows"
in the document above.

Thanks,
Bernd
on 9/19/2023 9:12 PM, Bernd Schubert wrote:
> On 9/19/23 08:11, Kemeng Shi wrote:
>> [earlier discussion snipped]
>>
>> Sure, WRITE_ONCE looks better. I wonder if we should use READ_ONCE in the reader.
>> Would like to get any advice. Thanks!
>
> I'm not entirely sure either, but I _think_ the compiler is free to store a 32 bit value with multiple operations (like 2 x 16 bit). In that case a competing reading thread might read garbage...
> Although I don't see this documented here:
> https://www.kernel.org/doc/Documentation/memory-barriers.txt

Sorry for the delay - it took me some time to go through the barrier documents.

I found this is documented in the section
"(*) For aligned memory locations whose size allows them to be accessed..."
so WRITE_ONCE is absolutely needed, as you mentioned before.

> Regarding READ_ONCE, I don't have a strong opinion: if the compiler makes some optimizations and the value is wrong for a few cycles, would that matter for this variable? READ_ONCE would certainly be safer, but I don't know if it is needed.
> See the section
> "The compiler is within its rights to omit a load entirely if it knows"
> in the document above.

I went through all the examples of optimizations in the document, and congestion_threshold has none of the problems described there.
For the specific case you mentioned, "The compiler is within its rights to omit a load entirely if it knows": the compiler keeps the first load and only omits repeated loads of the same variable in a loop. As congestion_threshold is not loaded repeatedly in a loop, it is not affected by this.
Anyway, congestion_threshold is more like a hint, and the worst case is that it is stale for a few cycles. I prefer to keep reading congestion_threshold without READ_ONCE and will do so in the next version if that's fine with you. Thanks!
On 9/27/23 05:04, Kemeng Shi wrote:
> [earlier discussion snipped]
>
> I went through all the examples of optimizations in the document, and congestion_threshold has none of the problems described there.
> Anyway, congestion_threshold is more like a hint, and the worst case is that it is stale for a few cycles. I prefer to keep reading congestion_threshold without READ_ONCE and will do so in the next version if that's fine with you. Thanks!

Sounds good to me, thanks for reading the document carefully!

Bernd
diff --git a/fs/fuse/control.c b/fs/fuse/control.c
index 247ef4f76761..c5d7bf80efed 100644
--- a/fs/fuse/control.c
+++ b/fs/fuse/control.c
@@ -174,11 +174,7 @@ static ssize_t fuse_conn_congestion_threshold_write(struct file *file,
 	if (!fc)
 		goto out;
 
-	down_read(&fc->killsb);
-	spin_lock(&fc->bg_lock);
 	fc->congestion_threshold = val;
-	spin_unlock(&fc->bg_lock);
-	up_read(&fc->killsb);
 	fuse_conn_put(fc);
 out:
 	return ret;