[2/2] mm/page_alloc: remove unnecessary parameter batch of nr_pcp_free

Message ID 20230809100754.3094517-3-shikemeng@huaweicloud.com
State New
Series Two minor cleanups for pcp list in page_alloc

Commit Message

Kemeng Shi Aug. 9, 2023, 10:07 a.m. UTC
  We read batch from pcp and immediately pass it to nr_pcp_free. Read
batch from pcp inside nr_pcp_free instead, removing the unnecessary
batch parameter.

Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
---
 mm/page_alloc.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)
  

Comments

Chris Li Aug. 15, 2023, 5:46 p.m. UTC | #1
Hi Kemeng,

Since I am discussing the other patch in this series, I might as well comment
on this one too.

On Wed, Aug 09, 2023 at 06:07:54PM +0800, Kemeng Shi wrote:
> We read batch from pcp and immediately pass it to nr_pcp_free. Read
> batch from pcp inside nr_pcp_free instead, removing the unnecessary
> batch parameter.
> 
> Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
> ---
>  mm/page_alloc.c | 8 +++-----
>  1 file changed, 3 insertions(+), 5 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 1ddcb2707d05..bb1d14e806ad 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2376,10 +2376,10 @@ static bool free_unref_page_prepare(struct page *page, unsigned long pfn,
>  	return true;
>  }
>  
> -static int nr_pcp_free(struct per_cpu_pages *pcp, int high, int batch,
> -		       bool free_high)
> +static int nr_pcp_free(struct per_cpu_pages *pcp, int high, bool free_high)
>  {
>  	int min_nr_free, max_nr_free;
> +	int batch = READ_ONCE(pcp->batch);

Because nr_pcp_free is static and has only one caller, the function gets
inlined at the call site. I verified that in the compiled code on x86_64.

So in my opinion this change is not worthwhile: it produces the same
machine code. One minor side effect is that it will hide the commit
underneath it in "git blame".

Chris
  

Patch

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1ddcb2707d05..bb1d14e806ad 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2376,10 +2376,10 @@  static bool free_unref_page_prepare(struct page *page, unsigned long pfn,
 	return true;
 }
 
-static int nr_pcp_free(struct per_cpu_pages *pcp, int high, int batch,
-		       bool free_high)
+static int nr_pcp_free(struct per_cpu_pages *pcp, int high, bool free_high)
 {
 	int min_nr_free, max_nr_free;
+	int batch = READ_ONCE(pcp->batch);
 
 	/* Free everything if batch freeing high-order pages. */
 	if (unlikely(free_high))
@@ -2446,9 +2446,7 @@  static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
 
 	high = nr_pcp_high(pcp, zone, free_high);
 	if (pcp->count >= high) {
-		int batch = READ_ONCE(pcp->batch);
-
-		free_pcppages_bulk(zone, nr_pcp_free(pcp, high, batch, free_high), pcp, pindex);
+		free_pcppages_bulk(zone, nr_pcp_free(pcp, high, free_high), pcp, pindex);
 	}
 }