[07/12] writeback: Factor writeback_iter_init() out of write_cache_pages()

Message ID 20230626173521.459345-8-willy@infradead.org
State New
Series Convert write_cache_pages() to an iterator

Commit Message

Matthew Wilcox June 26, 2023, 5:35 p.m. UTC
  Make it return the first folio in the batch so that we can use it
in a typical for() pattern.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/page-writeback.c | 48 ++++++++++++++++++++++++---------------------
 1 file changed, 26 insertions(+), 22 deletions(-)
  

Comments

Christoph Hellwig June 27, 2023, 4:30 a.m. UTC | #1
On Mon, Jun 26, 2023 at 06:35:16PM +0100, Matthew Wilcox (Oracle) wrote:
> +	for (folio = writeback_iter_init(mapping, wbc);
> +	     folio;
> +	     folio = writeback_get_next(mapping, wbc)) {

Ok that's another way to structure it.  Guess I should look over the
whole series first..
  
Christoph Hellwig June 27, 2023, 4:31 a.m. UTC | #2
On Mon, Jun 26, 2023 at 09:30:07PM -0700, Christoph Hellwig wrote:
> On Mon, Jun 26, 2023 at 06:35:16PM +0100, Matthew Wilcox (Oracle) wrote:
> > +	for (folio = writeback_iter_init(mapping, wbc);
> > +	     folio;
> > +	     folio = writeback_get_next(mapping, wbc)) {
> 
> Ok that's another way to structure it.  Guess I should look over the
> whole series first..

That being said, given that writeback_iter_init calls
writeback_get_next anyway,

	writeback_iter_init(mapping, wbc);
	while ((folio = writeback_get_next(mapping, wbc)))

still feels a little easier to follow to me.  No hard feelings either
way, just an observation.
  
Matthew Wilcox June 27, 2023, 11:08 a.m. UTC | #3
On Mon, Jun 26, 2023 at 09:31:56PM -0700, Christoph Hellwig wrote:
> On Mon, Jun 26, 2023 at 09:30:07PM -0700, Christoph Hellwig wrote:
> > On Mon, Jun 26, 2023 at 06:35:16PM +0100, Matthew Wilcox (Oracle) wrote:
> > > +	for (folio = writeback_iter_init(mapping, wbc);
> > > +	     folio;
> > > +	     folio = writeback_get_next(mapping, wbc)) {
> > 
> > Ok that's another way to structure it.  Guess I should look over the
> > whole series first..

Perhaps ... it's a little hard to decide which of your comments
are worth replying to, and which are obviated by later realisations.

> That being said, given that writeback_iter_init calls
> writeback_get_next anyway,
> 
> 	writeback_iter_init(mapping, wbc);
> 	while ((folio = writeback_get_next(mapping, wbc)))
> 
> still feels a little easier to follow to me.  No hard feelings either
> way, just an observation.

I had it structured that way originally, but we need to pass in 'error'
to the get_next, and it's better if we also pass in 'folio', which means
that the user then needs to initialise error to 0 and folio to NULL
before using the macro, and that all felt a bit "You're holding it wrong".
  

Patch

diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index f782b48c5b0c..18f332611a52 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2430,6 +2430,28 @@  static bool should_writeback_folio(struct address_space *mapping,
 	return true;
 }
 
+static struct folio *writeback_iter_init(struct address_space *mapping,
+		struct writeback_control *wbc)
+{
+	if (wbc->range_cyclic) {
+		wbc->index = mapping->writeback_index; /* prev offset */
+		wbc->end = -1;
+	} else {
+		wbc->index = wbc->range_start >> PAGE_SHIFT;
+		wbc->end = wbc->range_end >> PAGE_SHIFT;
+		if (wbc->range_start == 0 && wbc->range_end == LLONG_MAX)
+			wbc->range_whole = 1;
+	}
+	if (wbc->sync_mode == WB_SYNC_ALL || wbc->tagged_writepages)
+		tag_pages_for_writeback(mapping, wbc->index, wbc->end);
+
+	wbc->done_index = wbc->index;
+	folio_batch_init(&wbc->fbatch);
+	wbc->err = 0;
+
+	return writeback_get_next(mapping, wbc);
+}
+
 /**
  * write_cache_pages - walk the list of dirty pages of the given address space and write all of them.
  * @mapping: address space structure to write
@@ -2465,30 +2487,12 @@  int write_cache_pages(struct address_space *mapping,
 		      struct writeback_control *wbc, writepage_t writepage,
 		      void *data)
 {
+	struct folio *folio;
 	int error;
 
-	if (wbc->range_cyclic) {
-		wbc->index = mapping->writeback_index; /* prev offset */
-		wbc->end = -1;
-	} else {
-		wbc->index = wbc->range_start >> PAGE_SHIFT;
-		wbc->end = wbc->range_end >> PAGE_SHIFT;
-		if (wbc->range_start == 0 && wbc->range_end == LLONG_MAX)
-			wbc->range_whole = 1;
-	}
-	if (wbc->sync_mode == WB_SYNC_ALL || wbc->tagged_writepages)
-		tag_pages_for_writeback(mapping, wbc->index, wbc->end);
-
-	wbc->done_index = wbc->index;
-	folio_batch_init(&wbc->fbatch);
-	wbc->err = 0;
-
-	for (;;) {
-		struct folio *folio = writeback_get_next(mapping, wbc);
-
-		if (!folio)
-			break;
-
+	for (folio = writeback_iter_init(mapping, wbc);
+	     folio;
+	     folio = writeback_get_next(mapping, wbc)) {
 		wbc->done_index = folio->index;
 
 		folio_lock(folio);