[-next,v2,0/7] limit the number of plugged bio

Message ID 20230426082031.1299149-1-yukuai1@huaweicloud.com

Message

Yu Kuai April 26, 2023, 8:20 a.m. UTC
  From: Yu Kuai <yukuai3@huawei.com>

Changes in v2:
 - remove the patch to rename raid1-10.c

This patchset limits the number of plugged bios for raid1 and raid10;
the limit itself is added in the last patch, and the other patches are
refactoring and optimizations.

This patchset is tested with a new test [1] that triggers dirty page
writeback for 10s while checking disk inflight in the meantime.

Before this patchset, the test fails because inflight exceeds the
threshold (the threshold is set to 4096 in the test; in theory it can
be much greater as long as there are enough dirty pages and memory).

After this patchset, inflight stays within 96 (MAX_PLUG_BIO * copies).

[1] https://lore.kernel.org/linux-raid/20230426073447.1294916-1-yukuai1@huaweicloud.com/
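
To make the idea concrete, the mechanism of the last patch can be
sketched roughly as follows. This is only an illustration based on this
cover letter: the helper name, the count field and the value of
MAX_PLUG_BIO are assumptions, not the code from the series.

#include <linux/blkdev.h>
#include <linux/bio.h>
#include "md.h"                         /* struct mddev */

#define MAX_PLUG_BIO    32              /* illustrative value */

struct raid1_plug_cb {
        struct blk_plug_cb      cb;
        struct bio_list         pending;
        unsigned int            count;
};

/*
 * Collect a write bio on the per-plug list; once roughly MAX_PLUG_BIO
 * bios per copy have accumulated, invoke the plug callback early so
 * that inflight stays bounded instead of growing with dirty memory.
 */
static bool raid1_add_bio_to_plug(struct mddev *mddev, struct bio *bio,
                                  blk_plug_cb_fn unplug, int copies)
{
        struct raid1_plug_cb *plug;
        struct blk_plug_cb *cb;

        cb = blk_check_plugged(unplug, mddev, sizeof(*plug));
        if (!cb)
                return false;   /* no plug in progress, caller submits directly */

        plug = container_of(cb, struct raid1_plug_cb, cb);
        bio_list_add(&plug->pending, bio);

        /* Flush early instead of letting the plug grow without bound. */
        if (++plug->count >= MAX_PLUG_BIO * copies) {
                list_del(&cb->list);
                cb->callback(cb, false);
        }

        return true;
}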

Yu Kuai (7):
  md/raid10: prevent soft lockup while flush writes
  md/raid1-10: factor out a helper to add bio to plug
  md/raid1-10: factor out a helper to submit normal write
  md/raid1-10: submit write io directly if bitmap is not enabled
  md/md-bitmap: add a new helper to unplug bitmap asynchrously
  md/raid1-10: don't handle pluged bio by daemon thread
  md/raid1-10: limit the number of plugged bio

 drivers/md/md-bitmap.c | 55 +++++++++++++++++++++++++++++++++----
 drivers/md/md-bitmap.h | 10 +++++++
 drivers/md/raid1-10.c  | 62 ++++++++++++++++++++++++++++++++++++++++++
 drivers/md/raid1.c     | 29 ++++----------------
 drivers/md/raid10.c    | 47 +++++++-------------------------
 5 files changed, 136 insertions(+), 67 deletions(-)
  

Comments

Yu Kuai May 12, 2023, 9:42 a.m. UTC | #1
Hi,

On 2023/04/26 16:20, Yu Kuai wrote:
> From: Yu Kuai <yukuai3@huawei.com>
> 
> Changes in v2:
>   - remove the patch to rename raid1-10.c
> 
> This patchset limits the number of plugged bios for raid1 and raid10;
> the limit itself is added in the last patch, and the other patches are
> refactoring and optimizations.
> 
> This patchset is tested with a new test [1] that triggers dirty page
> writeback for 10s while checking disk inflight in the meantime.
> 
> Before this patchset, the test fails because inflight exceeds the
> threshold (the threshold is set to 4096 in the test; in theory it can
> be much greater as long as there are enough dirty pages and memory).
>
> After this patchset, inflight stays within 96 (MAX_PLUG_BIO * copies).
> 
> [1] https://lore.kernel.org/linux-raid/20230426073447.1294916-1-yukuai1@huaweicloud.com/

Friendly ping...

Thanks,
Kuai
> 
> Yu Kuai (7):
>    md/raid10: prevent soft lockup while flush writes
>    md/raid1-10: factor out a helper to add bio to plug
>    md/raid1-10: factor out a helper to submit normal write
>    md/raid1-10: submit write io directly if bitmap is not enabled
>    md/md-bitmap: add a new helper to unplug bitmap asynchrously
>    md/raid1-10: don't handle pluged bio by daemon thread
>    md/raid1-10: limit the number of plugged bio
> 
>   drivers/md/md-bitmap.c | 55 +++++++++++++++++++++++++++++++++----
>   drivers/md/md-bitmap.h | 10 +++++++
>   drivers/md/raid1-10.c  | 62 ++++++++++++++++++++++++++++++++++++++++++
>   drivers/md/raid1.c     | 29 ++++----------------
>   drivers/md/raid10.c    | 47 +++++++-------------------------
>   5 files changed, 136 insertions(+), 67 deletions(-)
>
  
Song Liu May 13, 2023, 12:50 a.m. UTC | #2
On Fri, May 12, 2023 at 2:43 AM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>
> Hi,
>
> On 2023/04/26 16:20, Yu Kuai wrote:
> > From: Yu Kuai <yukuai3@huawei.com>
> >
> > Changes in v2:
> >   - remove the patch to rename raid1-10.c
> >
> > This patchset limits the number of plugged bios for raid1 and raid10;
> > the limit itself is added in the last patch, and the other patches are
> > refactoring and optimizations.
> >
> > This patchset is tested with a new test [1] that triggers dirty page
> > writeback for 10s while checking disk inflight in the meantime.
> >
> > Before this patchset, the test fails because inflight exceeds the
> > threshold (the threshold is set to 4096 in the test; in theory it can
> > be much greater as long as there are enough dirty pages and memory).
> >
> > After this patchset, inflight stays within 96 (MAX_PLUG_BIO * copies).
> >
> > [1] https://lore.kernel.org/linux-raid/20230426073447.1294916-1-yukuai1@huaweicloud.com/
>
> Friendly ping...

I am sorry for the delay.

The set looks good overall, but I will need some more time to take a closer
look. A few comments/questions:

1. For functions in raid1-10.c, let's prefix them with raid1_ instead of md_*.
2. Do we need unplug_wq to be per-bitmap? Would a shared queue work?

Thanks,
Song
  
Yu Kuai May 13, 2023, 2:03 a.m. UTC | #3
Hi,

On 2023/05/13 8:50, Song Liu wrote:
> On Fri, May 12, 2023 at 2:43 AM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>
>> Hi,
>>
>> On 2023/04/26 16:20, Yu Kuai wrote:
>>> From: Yu Kuai <yukuai3@huawei.com>
>>>
>>> Changes in v2:
>>>    - remove the patch to rename raid1-10.c
>>>
>>> This patchset limits the number of plugged bios for raid1 and raid10;
>>> the limit itself is added in the last patch, and the other patches are
>>> refactoring and optimizations.
>>>
>>> This patchset is tested with a new test [1] that triggers dirty page
>>> writeback for 10s while checking disk inflight in the meantime.
>>>
>>> Before this patchset, the test fails because inflight exceeds the
>>> threshold (the threshold is set to 4096 in the test; in theory it can
>>> be much greater as long as there are enough dirty pages and memory).
>>>
>>> After this patchset, inflight stays within 96 (MAX_PLUG_BIO * copies).
>>>
>>> [1] https://lore.kernel.org/linux-raid/20230426073447.1294916-1-yukuai1@huaweicloud.com/
>>
>> Friendly ping...
> 
> I am sorry for the delay.
> 
> The set looks good overall, but I will need some more time to take a closer
> look. A few comments/questions:
> 
> 1. For functions in raid1-10.c, let's prefix them with raid1_ instead of md_*.
Ok, I'll change that in v3.

> 2. Do we need unplug_wq to be per-bitmap? Would a shared queue work?

I think this can work. The limitation is that the global workqueue only
allows 256 works to be active at a time (WQ_DFL_ACTIVE), but that should
be enough.
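
For illustration, a shared-workqueue variant could look roughly like
the sketch below; the struct and function names here are assumptions
rather than code posted in this series:

#include <linux/workqueue.h>
#include <linux/completion.h>
#include "md-bitmap.h"          /* struct bitmap, md_bitmap_unplug() */

struct md_bitmap_unplug_work {
        struct work_struct      work;
        struct bitmap           *bitmap;
        struct completion       done;
};

static void md_bitmap_unplug_fn(struct work_struct *work)
{
        struct md_bitmap_unplug_work *unplug_work =
                container_of(work, struct md_bitmap_unplug_work, work);

        md_bitmap_unplug(unplug_work->bitmap);
        complete(&unplug_work->done);
}

void md_bitmap_unplug_async(struct bitmap *bitmap)
{
        struct md_bitmap_unplug_work unplug_work;

        INIT_WORK_ONSTACK(&unplug_work.work, md_bitmap_unplug_fn);
        init_completion(&unplug_work.done);
        unplug_work.bitmap = bitmap;

        /*
         * system_wq runs at most WQ_DFL_ACTIVE (256) works concurrently;
         * anything beyond that simply waits on the queue, which is fine
         * because the caller blocks until its own unplug completes.
         */
        queue_work(system_wq, &unplug_work.work);
        wait_for_completion(&unplug_work.done);
        destroy_work_on_stack(&unplug_work.work);
}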

Thanks,
Kuai
> 
> Thanks,
> Song
> .
>