[2/3] mm/page_ext: remove rollback for untouched mem_section in online_page_ext

Message ID: 20230714114749.1743032-3-shikemeng@huaweicloud.com
Series: minor cleanups for page_ext

Commit Message

Kemeng Shi July 14, 2023, 11:47 a.m. UTC
  If init_section_page_ext fails, we only need to roll back the
mem_sections that were initialized before the failed one. Make the
rollback end point at the failed mem_section to remove the unnecessary
rollback work.

Note that pfn += PAGES_PER_SECTION is executed even when
init_section_page_ext fails, so on loop exit pfn points to the
mem_section after the failed one. Subtract one section's worth of pages
from pfn to get back to the failed mem_section.

Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
---
 mm/page_ext.c | 1 +
 1 file changed, 1 insertion(+)
  

Comments

Andrew Morton July 14, 2023, 5:54 p.m. UTC | #1
On Fri, 14 Jul 2023 19:47:48 +0800 Kemeng Shi <shikemeng@huaweicloud.com> wrote:

> If init_section_page_ext failed, we only need rollback for mem_section
> before failed mem_section. Make rollback end point to failed mem_section
> to remove unnecessary rollback.
> 
> As pfn += PAGES_PER_SECTION will be executed even if init_section_page_ext
> failed. So pfn points to mem_section after failed mem_section. Subtract
> one mem_section from pfn to get failed mem_section.
> 
> ...
>
> --- a/mm/page_ext.c
> +++ b/mm/page_ext.c
> @@ -424,6 +424,7 @@ static int __meminit online_page_ext(unsigned long start_pfn,
>  		return 0;
>  
>  	/* rollback */
> +	end = pfn - PAGES_PER_SECTION;
>  	for (pfn = start; pfn < end; pfn += PAGES_PER_SECTION)
>  		__free_page_ext(pfn);
>  

This is a bugfix, yes?

I guess init_section_page_ext() never fails for anyone...
  
Kemeng Shi July 17, 2023, 1:47 a.m. UTC | #2
On 7/15/2023 1:54 AM, Andrew Morton wrote:
> On Fri, 14 Jul 2023 19:47:48 +0800 Kemeng Shi <shikemeng@huaweicloud.com> wrote:
> 
>> If init_section_page_ext failed, we only need rollback for mem_section
>> before failed mem_section. Make rollback end point to failed mem_section
>> to remove unnecessary rollback.
>>
>> As pfn += PAGES_PER_SECTION will be executed even if init_section_page_ext
>> failed. So pfn points to mem_section after failed mem_section. Subtract
>> one mem_section from pfn to get failed mem_section.
>>
>> ...
>>
>> --- a/mm/page_ext.c
>> +++ b/mm/page_ext.c
>> @@ -424,6 +424,7 @@ static int __meminit online_page_ext(unsigned long start_pfn,
>>  		return 0;
>>  
>>  	/* rollback */
>> +	end = pfn - PAGES_PER_SECTION;
>>  	for (pfn = start; pfn < end; pfn += PAGES_PER_SECTION)
>>  		__free_page_ext(pfn);
>>  
> 
> This is a bugfix, yes?
> 
> I guess init_section_page_ext() never fails for anyone...
I marked this as a cleanup because __free_page_ext can handle the NULL
page_ext of an uninitialized mem_section, so no real bug is triggered
even if init_section_page_ext fails.
  

Patch

diff --git a/mm/page_ext.c b/mm/page_ext.c
index 096451df1c87..f052397dc70f 100644
--- a/mm/page_ext.c
+++ b/mm/page_ext.c
@@ -424,6 +424,7 @@ static int __meminit online_page_ext(unsigned long start_pfn,
 		return 0;
 
 	/* rollback */
+	end = pfn - PAGES_PER_SECTION;
 	for (pfn = start; pfn < end; pfn += PAGES_PER_SECTION)
 		__free_page_ext(pfn);