[v3,4/6] gfs2: Replace kmap_atomic() by kmap_local_page() in lops.c
Commit Message
kmap_atomic() is deprecated in favor of kmap_local_{folio,page}().
Therefore, replace kmap_atomic() with kmap_local_page() in the following
functions of lops.c:
- gfs2_jhead_pg_srch()
- gfs2_check_magic()
- gfs2_before_commit()
kmap_atomic() disables page faults and preemption (the latter only for
!PREEMPT_RT kernels). However, the code between the mapping and un-mapping
calls in these functions does not depend on the above-mentioned side effects.
Therefore, a mere replacement of the old API with the new one is all that
is required (i.e., there is no need to explicitly add any calls to
pagefault_disable() and/or preempt_disable()).
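For context, a minimal sketch (illustrative only, not part of this patch) of
how a hypothetical site that did rely on those side effects would have to be
converted, making them explicit as the highmem documentation describes:

/*
 * Illustrative sketch only: a hypothetical helper that relied on
 * kmap_atomic()'s implicit side effects would need explicit calls
 * after switching to kmap_local_page().
 */
static void example_zero_page(struct page *page)
{
	void *kaddr = kmap_local_page(page);

	preempt_disable();	/* only if the code relies on disabled preemption */
	pagefault_disable();	/* only if the code relies on disabled page faults */

	memset(kaddr, 0, PAGE_SIZE);	/* stand-in for the mapped access */

	pagefault_enable();
	preempt_enable();
	kunmap_local(kaddr);
}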
Suggested-by: Fabio M. De Francesco <fmdefrancesco@gmail.com>
Signed-off-by: Deepak R Varma <drv@mailo.com>
---
Changes in v3:
- Patch included in patch series
Changes in v2:
- None
fs/gfs2/lops.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
Comments
On Thursday, 29 June 2023 at 23:51:17 CEST, Deepak R Varma wrote:
> kmap_atomic() is deprecated in favor of kmap_local_{folio,page}().
Deepak,
Can you please add a reference to the highmem documentation and to Ira's
commit that added a deprecation check for kmap() and kmap_atomic() to
checkpatch.pl?
There may be maintainers / reviewers who are still unaware of this
information, and it would surely help them with reviewing. Furthermore, it
might prompt maintainers to convert their subsystem / driver to the new API,
or to drop the mapping and use plain page_address() (if it can be proven that
the pages cannot come from ZONE_HIGHMEM).
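For illustration only (not something this patch does), that last option would
look roughly like this in gfs2_check_magic(), assuming its pages are
guaranteed never to come from ZONE_HIGHMEM:

static void gfs2_check_magic(struct buffer_head *bh)
{
	void *kaddr;
	__be32 *ptr;

	clear_buffer_escaped(bh);
	/* page_address() is only valid for pages that cannot be in highmem */
	kaddr = page_address(bh->b_page);
	ptr = kaddr + bh_offset(bh);
	if (*ptr == cpu_to_be32(GFS2_MAGIC))
		set_buffer_escaped(bh);
}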
>
> Therefore, replace kmap_atomic() with kmap_local_page() in the following
> functions of lops.c:
> - gfs2_jhead_pg_srch()
> - gfs2_check_magic()
> - gfs2_before_commit()
>
> kmap_atomic() disables page faults and preemption (the latter only for
> !PREEMPT_RT kernels). However, the code between the mapping and un-mapping
> calls in these functions does not depend on the above-mentioned side effects.
>
> Therefore, a mere replacement of the old API with the new one is all that
> is required (i.e., there is no need to explicitly add any calls to
> pagefault_disable() and/or preempt_disable()).
>
> Suggested-by: Fabio M. De Francesco <fmdefrancesco@gmail.com>
> Signed-off-by: Deepak R Varma <drv@mailo.com>
> ---
> Changes in v3:
> - Patch included in patch series
>
> Changes in v2:
> - None
>
>
> fs/gfs2/lops.c | 12 ++++++------
> 1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/fs/gfs2/lops.c b/fs/gfs2/lops.c
> index 1902413d5d12..a7c2296cb3c6 100644
> --- a/fs/gfs2/lops.c
> +++ b/fs/gfs2/lops.c
> @@ -427,7 +427,7 @@ static bool gfs2_jhead_pg_srch(struct gfs2_jdesc *jd,
> {
> struct gfs2_sbd *sdp = GFS2_SB(jd->jd_inode);
> struct gfs2_log_header_host lh;
> - void *kaddr = kmap_atomic(page);
> + void *kaddr = kmap_local_page(page);
> unsigned int offset;
> bool ret = false;
>
Deepak,
Are we mixing declarations with function calls? Is that good practice? If not,
I'd suggest moving the mapping to a better-suited place.
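For illustration, a sketch (based on the context shown in the hunk above) of
what moving the mapping out of the declaration block could look like:

struct gfs2_sbd *sdp = GFS2_SB(jd->jd_inode);
struct gfs2_log_header_host lh;
void *kaddr;
unsigned int offset;
bool ret = false;

kaddr = kmap_local_page(page);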
>
> @@ -441,7 +441,7 @@ static bool gfs2_jhead_pg_srch(struct gfs2_jdesc *jd,
> }
> }
> }
> - kunmap_atomic(kaddr);
> + kunmap_local(kaddr);
> return ret;
> }
>
> @@ -626,11 +626,11 @@ static void gfs2_check_magic(struct buffer_head *bh)
> __be32 *ptr;
>
> clear_buffer_escaped(bh);
> - kaddr = kmap_atomic(bh->b_page);
> + kaddr = kmap_local_page(bh->b_page);
> ptr = kaddr + bh_offset(bh);
> if (*ptr == cpu_to_be32(GFS2_MAGIC))
> set_buffer_escaped(bh);
> - kunmap_atomic(kaddr);
> + kunmap_local(kaddr);
> }
>
> static int blocknr_cmp(void *priv, const struct list_head *a,
> @@ -699,10 +699,10 @@ static void gfs2_before_commit(struct gfs2_sbd *sdp, unsigned int limit,
> void *kaddr;
> page = mempool_alloc(gfs2_page_pool, GFP_NOIO);
> ptr = page_address(page);
> - kaddr = kmap_atomic(bd2->bd_bh->b_page);
> + kaddr = kmap_local_page(bd2->bd_bh->b_page);
> memcpy(ptr, kaddr + bh_offset(bd2->bd_bh),
> bd2->bd_bh->b_size);
>
Deepak,
How about memcpy_from_page()?
Thanks,
Fabio
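For reference, memcpy_from_page() (from include/linux/highmem.h) maps the page
with kmap_local_page(), copies, and unmaps in one call, so the hunk above
could collapse to something like this sketch:

memcpy_from_page(ptr, bd2->bd_bh->b_page, bh_offset(bd2->bd_bh),
		 bd2->bd_bh->b_size);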
>
> - kunmap_atomic(kaddr);
> + kunmap_local(kaddr);
> *(__be32 *)ptr = 0;
> clear_buffer_escaped(bd2->bd_bh);
> unlock_buffer(bd2->bd_bh);
> --
> 2.34.1
@@ -427,7 +427,7 @@ static bool gfs2_jhead_pg_srch(struct gfs2_jdesc *jd,
{
struct gfs2_sbd *sdp = GFS2_SB(jd->jd_inode);
struct gfs2_log_header_host lh;
- void *kaddr = kmap_atomic(page);
+ void *kaddr = kmap_local_page(page);
unsigned int offset;
bool ret = false;
@@ -441,7 +441,7 @@ static bool gfs2_jhead_pg_srch(struct gfs2_jdesc *jd,
}
}
}
- kunmap_atomic(kaddr);
+ kunmap_local(kaddr);
return ret;
}
@@ -626,11 +626,11 @@ static void gfs2_check_magic(struct buffer_head *bh)
__be32 *ptr;
clear_buffer_escaped(bh);
- kaddr = kmap_atomic(bh->b_page);
+ kaddr = kmap_local_page(bh->b_page);
ptr = kaddr + bh_offset(bh);
if (*ptr == cpu_to_be32(GFS2_MAGIC))
set_buffer_escaped(bh);
- kunmap_atomic(kaddr);
+ kunmap_local(kaddr);
}
static int blocknr_cmp(void *priv, const struct list_head *a,
@@ -699,10 +699,10 @@ static void gfs2_before_commit(struct gfs2_sbd *sdp, unsigned int limit,
void *kaddr;
page = mempool_alloc(gfs2_page_pool, GFP_NOIO);
ptr = page_address(page);
- kaddr = kmap_atomic(bd2->bd_bh->b_page);
+ kaddr = kmap_local_page(bd2->bd_bh->b_page);
memcpy(ptr, kaddr + bh_offset(bd2->bd_bh),
bd2->bd_bh->b_size);
- kunmap_atomic(kaddr);
+ kunmap_local(kaddr);
*(__be32 *)ptr = 0;
clear_buffer_escaped(bd2->bd_bh);
unlock_buffer(bd2->bd_bh);