[1/2] mm: remove redundant check in page_vma_mapped_walk

Message ID 20230704213932.1339204-2-shikemeng@huaweicloud.com
State New
Series: Two minor cleanups for page_vma_mapped.c

Commit Message

Kemeng Shi July 4, 2023, 9:39 p.m. UTC
  For the PVMW_SYNC case, we always take the pte lock when we get the
first pte of a PTE-mapped THP in map_pte, and hold it until:
1. the scan of the pmd range finishes, or
2. the scan of the user's input range finishes, or
3. the user stops the walk with page_vma_mapped_walk_done.
In each case, the pte lock is not released in the middle of the scan of a
PTE-mapped THP, so re-taking it inside the scan loop is redundant.

Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
---
 mm/page_vma_mapped.c | 4 ----
 1 file changed, 4 deletions(-)
  

Comments

Andrew Morton July 4, 2023, 5:05 p.m. UTC | #1
On Wed,  5 Jul 2023 05:39:31 +0800 Kemeng Shi <shikemeng@huaweicloud.com> wrote:

> For PVMW_SYNC case, we always take pte lock when get first pte of
> PTE-mapped THP in map_pte and hold it until:
> 1. scan of pmd range finished or
> 2. scan of user input range finished or
> 3. user stop walk with page_vma_mapped_walk_done.
> In each case. pte lock will not be freed during middle scan of PTE-mapped
> THP.
> 
> ...
>
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -275,10 +275,6 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>  				goto restart;
>  			}
>  			pvmw->pte++;
> -			if ((pvmw->flags & PVMW_SYNC) && !pvmw->ptl) {
> -				pvmw->ptl = pte_lockptr(mm, pvmw->pmd);
> -				spin_lock(pvmw->ptl);
> -			}
>  		} while (pte_none(*pvmw->pte));
>  
>  		if (!pvmw->ptl) {

This code has changed significantly since 6.4.  Please develop against
the mm-unstable branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm, thanks.
  
Kemeng Shi July 6, 2023, 2:37 a.m. UTC | #2
On 7/5/2023 1:05 AM, Andrew Morton wrote:
> On Wed,  5 Jul 2023 05:39:31 +0800 Kemeng Shi <shikemeng@huaweicloud.com> wrote:
> 
>> For PVMW_SYNC case, we always take pte lock when get first pte of
>> PTE-mapped THP in map_pte and hold it until:
>> 1. scan of pmd range finished or
>> 2. scan of user input range finished or
>> 3. user stop walk with page_vma_mapped_walk_done.
>> In each case. pte lock will not be freed during middle scan of PTE-mapped
>> THP.
>>
>> ...
>>
>> --- a/mm/page_vma_mapped.c
>> +++ b/mm/page_vma_mapped.c
>> @@ -275,10 +275,6 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>>  				goto restart;
>>  			}
>>  			pvmw->pte++;
>> -			if ((pvmw->flags & PVMW_SYNC) && !pvmw->ptl) {
>> -				pvmw->ptl = pte_lockptr(mm, pvmw->pmd);
>> -				spin_lock(pvmw->ptl);
>> -			}
>>  		} while (pte_none(*pvmw->pte));
>>  
>>  		if (!pvmw->ptl) {
> 
> This code has changed significantly since 6.4.  Please develop against
> the mm-unstable branch at
> git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm, thanks.
> 
> 
Thanks for reminding me of this; I will check my changes against the updated code.
  

Patch

diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index 4e448cfbc6ef..83858758e239 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -275,10 +275,6 @@  bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 				goto restart;
 			}
 			pvmw->pte++;
-			if ((pvmw->flags & PVMW_SYNC) && !pvmw->ptl) {
-				pvmw->ptl = pte_lockptr(mm, pvmw->pmd);
-				spin_lock(pvmw->ptl);
-			}
 		} while (pte_none(*pvmw->pte));
 
 		if (!pvmw->ptl) {