[RFC,v2,1/5] zram: remove zram_page_end_io function

Message ID 20230322135013.197076-2-p.raghav@samsung.com
State New
Series: remove page_endio()

Commit Message

Pankaj Raghav March 22, 2023, 1:50 p.m. UTC
  The zram_page_end_io function is called when alloc_page is used (for
partial IO) to trigger writeback from user space. The pages used for
this operation are never locked and never have writeback set, so it is
safe to remove the zram_page_end_io function that unlocks the page and
marks the end of writeback.

Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
---
 drivers/block/zram/zram_drv.c | 13 +------------
 1 file changed, 1 insertion(+), 12 deletions(-)
  

Comments

Christoph Hellwig March 23, 2023, 10:35 a.m. UTC | #1
On Wed, Mar 22, 2023 at 02:50:09PM +0100, Pankaj Raghav wrote:
> -	if (!parent)
> -		bio->bi_end_io = zram_page_end_io;
> -	else
> +	if (parent)

I don't think a non-chained bio without an end_io handler can work.
This !parent case seems to come from writeback_store, and as far as
I can tell is broken already in the current code as it just fires
off an async read without ever waiting for it, using an on-stack bio
just to make things complicated.

The bvec reading code in zram is a mess, but I have an idea how
to clean it up with a little series that should also help with
this issue.
  
Pankaj Raghav March 23, 2023, 3:50 p.m. UTC | #2
On 2023-03-23 11:35, Christoph Hellwig wrote:
> On Wed, Mar 22, 2023 at 02:50:09PM +0100, Pankaj Raghav wrote:
>> -	if (!parent)
>> -		bio->bi_end_io = zram_page_end_io;
>> -	else
>> +	if (parent)
> 
> I don't think a non-chained bio without an end_io handler can work.

Hmm. Is it because, in the case of a non-chained bio, the zram driver owns the bio,
and it is the driver's responsibility to call bio_put in the end_io handler?

> This !parent case seems to come from writeback_store, and as far as
> I can tell is broken already in the current code as it just fires
> off an async read without ever waiting for it, using an on-stack bio
> just to make things complicated.
> 
> The bvec reading code in zram is a mess, but I have an idea how
> to clean it up with a little series that should also help with
> this issue.
Sounds good.

As part of this series, should I just have an end_io which calls
bio_put then?

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index b7bb52f8dfbd..faa78fce327e 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -608,10 +608,6 @@ static void free_block_bdev(struct zram *zram, unsigned long blk_idx)

 static void zram_page_end_io(struct bio *bio)
 {
-       struct page *page = bio_first_page_all(bio);
-
-       page_endio(page, op_is_write(bio_op(bio)),
-                       blk_status_to_errno(bio->bi_status));
        bio_put(bio);
 }
  

Patch

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index b7bb52f8dfbd..2341f4009b0f 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -606,15 +606,6 @@  static void free_block_bdev(struct zram *zram, unsigned long blk_idx)
 	atomic64_dec(&zram->stats.bd_count);
 }
 
-static void zram_page_end_io(struct bio *bio)
-{
-	struct page *page = bio_first_page_all(bio);
-
-	page_endio(page, op_is_write(bio_op(bio)),
-			blk_status_to_errno(bio->bi_status));
-	bio_put(bio);
-}
-
 /*
  * Returns 1 if the submission is successful.
  */
@@ -634,9 +625,7 @@  static int read_from_bdev_async(struct zram *zram, struct bio_vec *bvec,
 		return -EIO;
 	}
 
-	if (!parent)
-		bio->bi_end_io = zram_page_end_io;
-	else
+	if (parent)
 		bio_chain(bio, parent);
 
 	submit_bio(bio);
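
[Editor's note] For context on Christoph's remark that the !parent path in
writeback_store fires off an async read without ever waiting for it: one
conventional fix is a synchronous read helper. The sketch below is purely
illustrative, not what the follow-up series actually does; the function name
read_from_bdev_sync is invented here, and it assumes the contemporary
bio_init()/__bio_add_page()/submit_bio_wait() block layer API.

```c
/*
 * Hypothetical sketch: a synchronous variant for the writeback_store
 * path, so the caller actually waits for the read to finish.
 * submit_bio_wait() blocks until the bio completes and needs no
 * bi_end_io handler or bio_put, which sidesteps the lifetime problem
 * of a fire-and-forget on-stack bio.
 */
static int read_from_bdev_sync(struct zram *zram, struct bio_vec *bvec,
			       unsigned long entry)
{
	struct bio bio;
	struct bio_vec bv;

	/* On-stack bio with a single inline bio_vec. */
	bio_init(&bio, zram->bdev, &bv, 1, REQ_OP_READ);
	bio.bi_iter.bi_sector = entry * (PAGE_SIZE >> 9);
	__bio_add_page(&bio, bvec->bv_page, bvec->bv_len, bvec->bv_offset);

	/* Blocks until completion; returns 0 or a negative errno. */
	return submit_bio_wait(&bio);
}
```

Because the bio lives on the caller's stack and the caller waits for it,
neither ownership transfer nor a custom end_io is needed in this variant.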