[-next,3/3] md: use interruptible apis in idle/frozen_sync_thread()

Message ID 20231228125553.2697765-4-yukuai1@huaweicloud.com
State New
Series md: some minor cleanups

Commit Message

Yu Kuai Dec. 28, 2023, 12:55 p.m. UTC
  From: Yu Kuai <yukuai3@huawei.com>

Before idle and frozen were refactored out of action_store(),
interruptible APIs were used so that a hung-task warning would not be
triggered if idling or freezing the sync_thread took too long. Switch
back to using the interruptible APIs.

To avoid making stop_sync_thread() more complicated, factor out a
helper, prepare_to_stop_sync_thread(), to replace it.

Also, return an error to the user if idle/frozen_sync_thread() fails;
otherwise the user will be misled.

Fixes: 130443d60b1b ("md: refactor idle/frozen_sync_thread() to fix deadlock")
Fixes: 8e8e2518fcec ("md: Close race when setting 'action' to 'idle'.")
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 drivers/md/md.c | 105 ++++++++++++++++++++++++++++++------------------
 1 file changed, 67 insertions(+), 38 deletions(-)
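
For reference, the interruptible-wait pattern the patch switches to, in
a minimal sketch (illustrative function, not the md code itself):

	/*
	 * Illustrative only: an uninterruptible wait_event() can trip the
	 * hung-task watchdog if stopping the sync_thread takes minutes,
	 * while wait_event_interruptible() lets a pending signal abort
	 * the wait and returns -ERESTARTSYS, which can be propagated to
	 * user space.
	 */
	static int example_wait_for_stop(struct mddev *mddev)
	{
		/* 0 on success, -ERESTARTSYS if a signal arrived */
		return wait_event_interruptible(resync_wait,
			!test_bit(MD_RECOVERY_RUNNING, &mddev->recovery));
	}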
  

Comments

Song Liu Jan. 30, 2024, 6:37 a.m. UTC | #1
Hi,

Sorry for the late reply.

The first two patches of the set look good, so I applied them to
md-tmp-6.9 branch. However, this one needs a respin.

On Thu, Dec 28, 2023 at 4:58 AM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>
> From: Yu Kuai <yukuai3@huawei.com>
>
> Before refactoring idle and frozen from action_store, interruptible apis
> is used so that hungtask warning won't be triggered if it takes too long
> to finish idle/frozen sync_thread. So change to use interruptible apis.

This paragraph is confusing. Please rephrase it.

>
> In order not to make stop_sync_thread() more complicated, factor out a
> helper prepare_to_stop_sync_thread() to replace stop_sync_thread().
>
> Also return error to user if idle/frozen_sync_thread() failed, otherwise
> user will be misleaded.

s/misleaded/misled/

>
> Fixes: 130443d60b1b ("md: refactor idle/frozen_sync_thread() to fix deadlock")
> Fixes: 8e8e2518fcec ("md: Close race when setting 'action' to 'idle'.")

Please add more information about what is being fixed here, so that
we can make a clear decision on whether the fix needs to be
backported to stable kernels.

> Signed-off-by: Yu Kuai <yukuai3@huawei.com>
> ---
>  drivers/md/md.c | 105 ++++++++++++++++++++++++++++++------------------
>  1 file changed, 67 insertions(+), 38 deletions(-)
>
> diff --git a/drivers/md/md.c b/drivers/md/md.c
> index 60f99768a1a9..9ea05de79fe4 100644
> --- a/drivers/md/md.c
> +++ b/drivers/md/md.c
> @@ -4846,26 +4846,34 @@ action_show(struct mddev *mddev, char *page)
>         return sprintf(page, "%s\n", type);
>  }
>
> +static bool sync_thread_stopped(struct mddev *mddev, int *sync_seq)

I think we need a comment for this.

> +{
> +       if (!test_bit(MD_RECOVERY_RUNNING, &mddev->recovery))
> +               return true;
> +
> +       if (sync_seq && *sync_seq != atomic_read(&mddev->sync_seq))
> +               return true;
> +
> +       return false;
> +}
> +
>  /**
> - * stop_sync_thread() - wait for sync_thread to stop if it's running.
> + * prepare_to_stop_sync_thread() - prepare to stop sync_thread if it's running.
>   * @mddev:     the array.
> - * @locked:    if set, reconfig_mutex will still be held after this function
> - *             return; if not set, reconfig_mutex will be released after this
> - *             function return.
> - * @check_seq: if set, only wait for curent running sync_thread to stop, noted
> - *             that new sync_thread can still start.
> + * @unlock:    whether or not caller want to release reconfig_mutex if
> + *             sync_thread is not running.
> + *
> + * Return true if sync_thread is running, release reconfig_mutex and do
> + * preparatory work to stop sync_thread, caller should wait for
> + * sync_thread_stopped() to return true. Return false if sync_thread is not
> + * running, reconfig_mutex will be released if @unlock is set.
>   */

I found prepare_to_stop_sync_thread very hard to reason. Please try to
rephrase the comment or refactor the code. Maybe it makes sense to put
the following logic and its variations to a separate function:

        if (prepare_to_stop_sync_thread(mddev, false)) {
                wait_event(resync_wait, sync_thread_stopped(mddev, NULL));
                mddev_lock_nointr(mddev);
        }

Thanks,
Song

> -static void stop_sync_thread(struct mddev *mddev, bool locked, bool check_seq)
> +static bool prepare_to_stop_sync_thread(struct mddev *mddev, bool unlock)
>  {
> -       int sync_seq;

[...]
  
Yu Kuai Jan. 30, 2024, 7:04 a.m. UTC | #2
Hi,

On 2024/01/30 14:37, Song Liu wrote:
> Hi,
> 
> Sorry for the late reply.
> 
> The first two patches of the set look good, so I applied them to
> md-tmp-6.9 branch. However, this one needs a respin.

We are fixing dm-raid regressions, so I'll not send a new version until
that work is done. :)
> 
> On Thu, Dec 28, 2023 at 4:58 AM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>
>> From: Yu Kuai <yukuai3@huawei.com>
>>
>> Before refactoring idle and frozen from action_store, interruptible apis
>> is used so that hungtask warning won't be triggered if it takes too long
>> to finish idle/frozen sync_thread. So change to use interruptible apis.
> 
> This paragraph is confusing. Please rephrase it.
> 
>>
>> In order not to make stop_sync_thread() more complicated, factor out a
>> helper prepare_to_stop_sync_thread() to replace stop_sync_thread().
>>
>> Also return error to user if idle/frozen_sync_thread() failed, otherwise
>> user will be misleaded.
> 
> s/misleaded/misled/
> 
>>
>> Fixes: 130443d60b1b ("md: refactor idle/frozen_sync_thread() to fix deadlock")
>> Fixes: 8e8e2518fcec ("md: Close race when setting 'action' to 'idle'.")
> 
> Please add more information about what is being fixed here, so that
> we can make a clear decision on whether the fix needs to be
> backported to stable kernels.

8e8e2518fcec added the interruptible APIs first; however, it doesn't
return an error to the user if the wait is interrupted.

130443d60b1b deleted the interruptible APIs.

Perhaps I should split this patch into two smaller patches, one for
each Fixes tag.

> 
>> Signed-off-by: Yu Kuai <yukuai3@huawei.com>
>> ---
>>   drivers/md/md.c | 105 ++++++++++++++++++++++++++++++------------------
>>   1 file changed, 67 insertions(+), 38 deletions(-)
>>
>> diff --git a/drivers/md/md.c b/drivers/md/md.c
>> index 60f99768a1a9..9ea05de79fe4 100644
>> --- a/drivers/md/md.c
>> +++ b/drivers/md/md.c
>> @@ -4846,26 +4846,34 @@ action_show(struct mddev *mddev, char *page)
>>          return sprintf(page, "%s\n", type);
>>   }
>>
>> +static bool sync_thread_stopped(struct mddev *mddev, int *sync_seq)
> 
> I think we need a comment for this.
> 
>> +{
>> +       if (!test_bit(MD_RECOVERY_RUNNING, &mddev->recovery))
>> +               return true;
>> +
>> +       if (sync_seq && *sync_seq != atomic_read(&mddev->sync_seq))
>> +               return true;
>> +
>> +       return false;
>> +}
>> +
>>   /**
>> - * stop_sync_thread() - wait for sync_thread to stop if it's running.
>> + * prepare_to_stop_sync_thread() - prepare to stop sync_thread if it's running.
>>    * @mddev:     the array.
>> - * @locked:    if set, reconfig_mutex will still be held after this function
>> - *             return; if not set, reconfig_mutex will be released after this
>> - *             function return.
>> - * @check_seq: if set, only wait for curent running sync_thread to stop, noted
>> - *             that new sync_thread can still start.
>> + * @unlock:    whether or not caller want to release reconfig_mutex if
>> + *             sync_thread is not running.
>> + *
>> + * Return true if sync_thread is running, release reconfig_mutex and do
>> + * preparatory work to stop sync_thread, caller should wait for
>> + * sync_thread_stopped() to return true. Return false if sync_thread is not
>> + * running, reconfig_mutex will be released if @unlock is set.
>>    */
> 
> I found prepare_to_stop_sync_thread very hard to reason. Please try to
> rephrase the comment or refactor the code. Maybe it makes sense to put
> the following logic and its variations to a separate function:
> 
>          if (prepare_to_stop_sync_thread(mddev, false)) {
>                  wait_event(resync_wait, sync_thread_stopped(mddev, NULL));
>                  mddev_lock_nointr(mddev);
>          }

I can do this, but there are 5 callers and only two of them can use the
separate helper. Perhaps something like this?

void stop_sync_thread(struct mddev *mddev, bool wait_sb)
{
	if (prepare_to_stop_sync_thread(mddev, wait_sb)) {
		wait_event(resync_wait, ...);
		if (!wait_sb) {
			mddev_lock_nointr(mddev);
			return;
		}
	}

	if (wait_sb) {
		wait_event(sb_wait, ...);
		mddev_lock_nointr(mddev);
	}
}

int stop_sync_thread_interruptible(struct mddev *mddev, bool check_sync_seq)
{
...
}

Thanks,
Kuai

> 
> Thanks,
> Song
> 
>> -static void stop_sync_thread(struct mddev *mddev, bool locked, bool check_seq)
>> +static bool prepare_to_stop_sync_thread(struct mddev *mddev, bool unlock)
>>   {
>> -       int sync_seq;
> 
> [...]
> .
>
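
Filling in the elided wait conditions from the md_set_readonly() hunk
in the patch below, Kuai's proposal would read roughly as follows (one
plausible completion of the sketch, not code posted verbatim):

	void stop_sync_thread(struct mddev *mddev, bool wait_sb)
	{
		if (prepare_to_stop_sync_thread(mddev, wait_sb)) {
			wait_event(resync_wait,
				   sync_thread_stopped(mddev, NULL));
			if (!wait_sb) {
				mddev_lock_nointr(mddev);
				return;
			}
		}

		if (wait_sb) {
			/* as in md_set_readonly(): wait for sb writeback */
			wait_event(mddev->sb_wait,
				   !test_bit(MD_SB_CHANGE_PENDING,
					     &mddev->sb_flags));
			mddev_lock_nointr(mddev);
		}
	}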
  
Song Liu Jan. 30, 2024, 7:34 a.m. UTC | #3
On Mon, Jan 29, 2024 at 11:04 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>
> Hi,
>
> On 2024/01/30 14:37, Song Liu wrote:
> > Hi,
> >
> > Sorry for the late reply.
> >
> > The first two patches of the set look good, so I applied them to
> > md-tmp-6.9 branch. However, this one needs a respin.
>
> We are fixing dm-raid regressions, so I'll not send a new version until
> that work is done. :)

Sure. Fixing the regression is more urgent.

> >
> > On Thu, Dec 28, 2023 at 4:58 AM Yu Kuai <yukuai1@huaweicloud.com> wrote:
> >>
> >> From: Yu Kuai <yukuai3@huawei.com>
[...]
> > I found prepare_to_stop_sync_thread very hard to reason. Please try to
> > rephrase the comment or refactor the code. Maybe it makes sense to put
> > the following logic and its variations to a separate function:
> >
> >          if (prepare_to_stop_sync_thread(mddev, false)) {
> >                  wait_event(resync_wait, sync_thread_stopped(mddev, NULL));
> >                  mddev_lock_nointr(mddev);
> >          }
>
> I can do this, but there are 5 callers and only two of them can use the
> separate helper. Perhaps something like this?
>
> void stop_sync_thread(struct mddev *mddev, bool wait_sb)
> {
>         if (prepare_to_stop_sync_thread(mddev, wait_sb)) {
>                 wait_event(resync_wait, ...);
>                 if (!wait_sb) {
>                         mddev_lock_nointr(mddev);
>                         return;
>                 }
>         }
>
>         if (wait_sb) {
>                 wait_event(sb_wait, ...);
>                 mddev_lock_nointr(mddev);
>         }
> }

I don't really like this version either. Let's think more about this
after fixing the dm-raid regressions.

Thanks,
Song

>
> int stop_sync_thread_interruptible(struct mddev *mddev, bool check_sync_seq)
> {
> ...
> }
  

Patch

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 60f99768a1a9..9ea05de79fe4 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -4846,26 +4846,34 @@  action_show(struct mddev *mddev, char *page)
 	return sprintf(page, "%s\n", type);
 }
 
+static bool sync_thread_stopped(struct mddev *mddev, int *sync_seq)
+{
+	if (!test_bit(MD_RECOVERY_RUNNING, &mddev->recovery))
+		return true;
+
+	if (sync_seq && *sync_seq != atomic_read(&mddev->sync_seq))
+		return true;
+
+	return false;
+}
+
 /**
- * stop_sync_thread() - wait for sync_thread to stop if it's running.
+ * prepare_to_stop_sync_thread() - prepare to stop sync_thread if it's running.
  * @mddev:	the array.
- * @locked:	if set, reconfig_mutex will still be held after this function
- *		return; if not set, reconfig_mutex will be released after this
- *		function return.
- * @check_seq:	if set, only wait for curent running sync_thread to stop, noted
- *		that new sync_thread can still start.
+ * @unlock:	whether or not caller want to release reconfig_mutex if
+ *		sync_thread is not running.
+ *
+ * Return true if sync_thread is running, release reconfig_mutex and do
+ * preparatory work to stop sync_thread, caller should wait for
+ * sync_thread_stopped() to return true. Return false if sync_thread is not
+ * running, reconfig_mutex will be released if @unlock is set.
  */
-static void stop_sync_thread(struct mddev *mddev, bool locked, bool check_seq)
+static bool prepare_to_stop_sync_thread(struct mddev *mddev, bool unlock)
 {
-	int sync_seq;
-
-	if (check_seq)
-		sync_seq = atomic_read(&mddev->sync_seq);
-
 	if (!test_bit(MD_RECOVERY_RUNNING, &mddev->recovery)) {
-		if (!locked)
+		if (unlock)
 			mddev_unlock(mddev);
-		return;
+		return false;
 	}
 
 	mddev_unlock(mddev);
@@ -4879,53 +4887,67 @@  static void stop_sync_thread(struct mddev *mddev, bool locked, bool check_seq)
 	if (work_pending(&mddev->sync_work))
 		flush_work(&mddev->sync_work);
 
-	wait_event(resync_wait,
-		   !test_bit(MD_RECOVERY_RUNNING, &mddev->recovery) ||
-		   (check_seq && sync_seq != atomic_read(&mddev->sync_seq)));
-
-	if (locked)
-		mddev_lock_nointr(mddev);
+	return true;
 }
 
-static void idle_sync_thread(struct mddev *mddev)
+static int idle_sync_thread(struct mddev *mddev)
 {
-	mutex_lock(&mddev->sync_mutex);
+	int sync_seq = atomic_read(&mddev->sync_seq);
+	int err = mutex_lock_interruptible(&mddev->sync_mutex);
+
+	if (err)
+		return err;
 	clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
 
-	if (mddev_lock(mddev)) {
+	err = mddev_lock(mddev);
+	if (err) {
 		mutex_unlock(&mddev->sync_mutex);
-		return;
+		return err;
 	}
 
-	stop_sync_thread(mddev, false, true);
+	if (prepare_to_stop_sync_thread(mddev, true))
+		err = wait_event_interruptible(resync_wait,
+			   sync_thread_stopped(mddev, &sync_seq));
+
 	mutex_unlock(&mddev->sync_mutex);
+
+	return err;
 }
 
-static void frozen_sync_thread(struct mddev *mddev)
+static int frozen_sync_thread(struct mddev *mddev)
 {
-	mutex_lock(&mddev->sync_mutex);
+	int err = mutex_lock_interruptible(&mddev->sync_mutex);
+
+	if (err)
+		return err;
 	set_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
 
-	if (mddev_lock(mddev)) {
+	err = mddev_lock(mddev);
+	if (err) {
 		mutex_unlock(&mddev->sync_mutex);
-		return;
+		return err;
 	}
 
-	stop_sync_thread(mddev, false, false);
+	if (prepare_to_stop_sync_thread(mddev, true))
+		err = wait_event_interruptible(resync_wait,
+			   sync_thread_stopped(mddev, NULL));
 	mutex_unlock(&mddev->sync_mutex);
+
+	return err;
 }
 
 static ssize_t
 action_store(struct mddev *mddev, const char *page, size_t len)
 {
+	int err = 0;
+
 	if (!mddev->pers || !mddev->pers->sync_request)
 		return -EINVAL;
 
-
 	if (cmd_match(page, "idle"))
-		idle_sync_thread(mddev);
+		err = idle_sync_thread(mddev);
 	else if (cmd_match(page, "frozen"))
-		frozen_sync_thread(mddev);
+		err = frozen_sync_thread(mddev);
 	else if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery))
 		return -EBUSY;
 	else if (cmd_match(page, "resync"))
@@ -4934,7 +4956,6 @@  action_store(struct mddev *mddev, const char *page, size_t len)
 		clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
 		set_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
 	} else if (cmd_match(page, "reshape")) {
-		int err;
 		if (mddev->pers->start_reshape == NULL)
 			return -EINVAL;
 		err = mddev_lock(mddev);
@@ -4980,7 +5001,7 @@  action_store(struct mddev *mddev, const char *page, size_t len)
 	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
 	md_wakeup_thread(mddev->thread);
 	sysfs_notify_dirent_safe(mddev->sysfs_action);
-	return len;
+	return err ? err : len;
 }
 
 static struct md_sysfs_entry md_scan_mode =
@@ -6280,7 +6301,11 @@  static void md_clean(struct mddev *mddev)
 
 static void __md_stop_writes(struct mddev *mddev)
 {
-	stop_sync_thread(mddev, true, false);
+	if (prepare_to_stop_sync_thread(mddev, false)) {
+		wait_event(resync_wait, sync_thread_stopped(mddev, NULL));
+		mddev_lock_nointr(mddev);
+	}
+
 	del_timer_sync(&mddev->safemode_timer);
 
 	if (mddev->pers && mddev->pers->quiesce) {
@@ -6369,7 +6394,8 @@  static int md_set_readonly(struct mddev *mddev, struct block_device *bdev)
 		set_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
 	}
 
-	stop_sync_thread(mddev, false, false);
+	if (prepare_to_stop_sync_thread(mddev, true))
+		wait_event(resync_wait, sync_thread_stopped(mddev, NULL));
 	wait_event(mddev->sb_wait,
 		   !test_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags));
 	mddev_lock_nointr(mddev);
@@ -6421,7 +6447,10 @@  static int do_md_stop(struct mddev *mddev, int mode,
 		set_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
 	}
 
-	stop_sync_thread(mddev, true, false);
+	if (prepare_to_stop_sync_thread(mddev, false)) {
+		wait_event(resync_wait, sync_thread_stopped(mddev, NULL));
+		mddev_lock_nointr(mddev);
+	}
 
 	mutex_lock(&mddev->open_mutex);
 	if ((mddev->pers && atomic_read(&mddev->openers) > !!bdev) ||
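
A user-visible consequence of the action_store() change above, shown as
a small illustrative user-space program (not from the thread): a write
of "idle" or "frozen" to sync_action can now return an error (typically
EINTR) when interrupted by a signal, instead of blocking
uninterruptibly, so callers should check the result.

	#include <errno.h>
	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		int fd = open("/sys/block/md0/md/sync_action", O_WRONLY);

		if (fd < 0)
			return 1;
		/* with this patch the write may fail instead of hanging */
		if (write(fd, "idle", 4) < 0 && errno == EINTR)
			fprintf(stderr, "interrupted; sync_thread may still be running\n");
		close(fd);
		return 0;
	}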