[v2,3/8] vringh: replace kmap_atomic() with kmap_local_page()

Message ID 20230302113421.174582-4-sgarzare@redhat.com
State New
Series: vdpa_sim: add support for user VA

Commit Message

Stefano Garzarella March 2, 2023, 11:34 a.m. UTC
  kmap_atomic() is deprecated in favor of kmap_local_page().

With kmap_local_page() the mappings are per thread, CPU local, can take
page-faults, and can be called from any context (including interrupts).
Furthermore, the tasks can be preempted and, when they are scheduled to
run again, the kernel virtual addresses are restored and still valid.

kmap_atomic() is implemented like a kmap_local_page() which also disables
page-faults and preemption (the latter only for !PREEMPT_RT kernels,
otherwise it only disables migration).

The code within the mappings/un-mappings in getu16_iotlb() and
putu16_iotlb() doesn't depend on the above-mentioned side effects of
kmap_atomic(), so a mere replacement of the old API with the new one
is all that is required (i.e., there is no need to explicitly add calls
to pagefault_disable() and/or preempt_disable()).

Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
---

Notes:
    v2:
    - added this patch since checkpatch.pl complained about deprecation
      of kmap_atomic() touched by next patch

 drivers/vhost/vringh.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
  

Comments

Jason Wang March 14, 2023, 3:56 a.m. UTC | #1
On Thu, Mar 2, 2023 at 7:34 PM Stefano Garzarella <sgarzare@redhat.com> wrote:
>
> kmap_atomic() is deprecated in favor of kmap_local_page().

It's better to mention the commit or code that introduces this.

>
> With kmap_local_page() the mappings are per thread, CPU local, can take
> page-faults, and can be called from any context (including interrupts).
> Furthermore, the tasks can be preempted and, when they are scheduled to
> run again, the kernel virtual addresses are restored and still valid.
>
> kmap_atomic() is implemented like a kmap_local_page() which also disables
> page-faults and preemption (the latter only for !PREEMPT_RT kernels,
> otherwise it only disables migration).
>
> The code within the mappings/un-mappings in getu16_iotlb() and
> putu16_iotlb() don't depend on the above-mentioned side effects of
> kmap_atomic(),

Note we used to use a spinlock to protect the simulators (at least until
patch 7, so we probably need to re-order the patches), so I
think this is only valid when:

The vringh IOTLB helpers are not used in atomic context (e.g. spinlock,
interrupts).

If yes, should we document this? (Or should we introduce a boolean to
say whether an IOTLB variant can be used in an atomic context)?

Thanks

> so that mere replacements of the old API with the new one
> is all that is required (i.e., there is no need to explicitly add calls
> to pagefault_disable() and/or preempt_disable()).
>
> Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
> ---
>
> Notes:
>     v2:
>     - added this patch since checkpatch.pl complained about deprecation
>       of kmap_atomic() touched by next patch
>
>  drivers/vhost/vringh.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/vhost/vringh.c b/drivers/vhost/vringh.c
> index a1e27da54481..0ba3ef809e48 100644
> --- a/drivers/vhost/vringh.c
> +++ b/drivers/vhost/vringh.c
> @@ -1220,10 +1220,10 @@ static inline int getu16_iotlb(const struct vringh *vrh,
>         if (ret < 0)
>                 return ret;
>
> -       kaddr = kmap_atomic(iov.bv_page);
> +       kaddr = kmap_local_page(iov.bv_page);
>         from = kaddr + iov.bv_offset;
>         *val = vringh16_to_cpu(vrh, READ_ONCE(*(__virtio16 *)from));
> -       kunmap_atomic(kaddr);
> +       kunmap_local(kaddr);
>
>         return 0;
>  }
> @@ -1241,10 +1241,10 @@ static inline int putu16_iotlb(const struct vringh *vrh,
>         if (ret < 0)
>                 return ret;
>
> -       kaddr = kmap_atomic(iov.bv_page);
> +       kaddr = kmap_local_page(iov.bv_page);
>         to = kaddr + iov.bv_offset;
>         WRITE_ONCE(*(__virtio16 *)to, cpu_to_vringh16(vrh, val));
> -       kunmap_atomic(kaddr);
> +       kunmap_local(kaddr);
>
>         return 0;
>  }
> --
> 2.39.2
>
  
Fabio M. De Francesco March 15, 2023, 9:12 p.m. UTC | #2
On Tuesday, 14 March 2023 04:56:08 CET Jason Wang wrote:
> On Thu, Mar 2, 2023 at 7:34 PM Stefano Garzarella <sgarzare@redhat.com> wrote:
> > kmap_atomic() is deprecated in favor of kmap_local_page().
> 
> It's better to mention the commit or code that introduces this.
> 
> > With kmap_local_page() the mappings are per thread, CPU local, can take
> > page-faults, and can be called from any context (including interrupts).
> > Furthermore, the tasks can be preempted and, when they are scheduled to
> > run again, the kernel virtual addresses are restored and still valid.
> > 
> > kmap_atomic() is implemented like a kmap_local_page() which also disables
> > page-faults and preemption (the latter only for !PREEMPT_RT kernels,
> > otherwise it only disables migration).
> > 
> > The code within the mappings/un-mappings in getu16_iotlb() and
> > putu16_iotlb() don't depend on the above-mentioned side effects of
> > kmap_atomic(),
> 
> Note we used to use spinlock to protect simulators (at least until
> patch 7, so we probably need to re-order the patches at least) so I
> think this is only valid when:
> 
> The vringh IOTLB helpers are not used in atomic context (e.g spinlock,
> interrupts).

I'm probably missing some context, but it looks like you are saying that 
kmap_local_page() is not suited for any use in atomic context (you are 
mentioning spinlocks).

The commit message (that I know pretty well since it's the exact copy, word by 
word, of my boiler plate commits) explains that kmap_local_page() is perfectly 
usable in atomic context (including interrupts).
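To make that point concrete, a kernel-style sketch (the function, lock, and parameter names here are invented for illustration and do not appear in the patch) of the pattern being discussed: kmap_local_page() may be called with a spinlock held because it neither sleeps nor requires staying on the same CPU.

```c
/* Illustrative sketch only -- hypothetical names, not part of the patch. */
static u16 read_u16_under_lock(struct page *page, unsigned int offset,
			       spinlock_t *lock)
{
	void *kaddr;
	u16 val;

	spin_lock(lock);		/* atomic context: sleeping forbidden */
	kaddr = kmap_local_page(page);	/* fine: does not sleep */
	val = *(u16 *)(kaddr + offset);
	kunmap_local(kaddr);
	spin_unlock(lock);

	return val;
}
```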

I don't know this code, however I am not able to see why these vringh IOTLB 
helpers cannot work if used under spinlocks. Can you please elaborate a little 
more?

> If yes, should we document this? (Or should we introduce a boolean to
> say whether an IOTLB variant can be used in an atomic context)?

Again, you'll have no problems from the use of kmap_local_page() and so you 
don't need any boolean to tell whether or not the code is running in atomic 
context. 

Please take a look at the Highmem documentation which has been recently 
reworked and extended by me: https://docs.kernel.org/mm/highmem.html

Anyway, I have been ATK 12 or 13 hours in a row. So I'm probably missing the 
whole picture.

Thanks, 

Fabio

> Thanks
> 
> > so that mere replacements of the old API with the new one
> > is all that is required (i.e., there is no need to explicitly add calls
> > to pagefault_disable() and/or preempt_disable()).
> > 
> > Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
> > ---
> > 
> > Notes:
> >     v2:
> >     - added this patch since checkpatch.pl complained about deprecation
> >     
> >       of kmap_atomic() touched by next patch
> >  
> >  drivers/vhost/vringh.c | 8 ++++----
> >  1 file changed, 4 insertions(+), 4 deletions(-)
> > 
> > diff --git a/drivers/vhost/vringh.c b/drivers/vhost/vringh.c
> > index a1e27da54481..0ba3ef809e48 100644
> > --- a/drivers/vhost/vringh.c
> > +++ b/drivers/vhost/vringh.c
> > @@ -1220,10 +1220,10 @@ static inline int getu16_iotlb(const struct vringh
> > *vrh,
> >         if (ret < 0)
> >         
> >                 return ret;
> > 
> > -       kaddr = kmap_atomic(iov.bv_page);
> > +       kaddr = kmap_local_page(iov.bv_page);
> > 
> >         from = kaddr + iov.bv_offset;
> >         *val = vringh16_to_cpu(vrh, READ_ONCE(*(__virtio16 *)from));
> > 
> > -       kunmap_atomic(kaddr);
> > +       kunmap_local(kaddr);
> > 
> >         return 0;
> >  
> >  }
> > 
> > @@ -1241,10 +1241,10 @@ static inline int putu16_iotlb(const struct vringh
> > *vrh,
> >         if (ret < 0)
> >         
> >                 return ret;
> > 
> > -       kaddr = kmap_atomic(iov.bv_page);
> > +       kaddr = kmap_local_page(iov.bv_page);
> > 
> >         to = kaddr + iov.bv_offset;
> >         WRITE_ONCE(*(__virtio16 *)to, cpu_to_vringh16(vrh, val));
> > 
> > -       kunmap_atomic(kaddr);
> > +       kunmap_local(kaddr);
> > 
> >         return 0;
> >  
> >  }
> > 
> > --
> > 2.39.2
  
Jason Wang March 16, 2023, 2:53 a.m. UTC | #3
On Thu, Mar 16, 2023 at 5:12 AM Fabio M. De Francesco
<fmdefrancesco@gmail.com> wrote:
>
> On Tuesday, 14 March 2023 04:56:08 CET Jason Wang wrote:
> > On Thu, Mar 2, 2023 at 7:34 PM Stefano Garzarella <sgarzare@redhat.com> wrote:
> > > kmap_atomic() is deprecated in favor of kmap_local_page().
> >
> > It's better to mention the commit or code that introduces this.
> >
> > > With kmap_local_page() the mappings are per thread, CPU local, can take
> > > page-faults, and can be called from any context (including interrupts).
> > > Furthermore, the tasks can be preempted and, when they are scheduled to
> > > run again, the kernel virtual addresses are restored and still valid.
> > >
> > > kmap_atomic() is implemented like a kmap_local_page() which also disables
> > > page-faults and preemption (the latter only for !PREEMPT_RT kernels,
> > > otherwise it only disables migration).
> > >
> > > The code within the mappings/un-mappings in getu16_iotlb() and
> > > putu16_iotlb() don't depend on the above-mentioned side effects of
> > > kmap_atomic(),
> >
> > Note we used to use spinlock to protect simulators (at least until
> > patch 7, so we probably need to re-order the patches at least) so I
> > think this is only valid when:
> >
> > The vringh IOTLB helpers are not used in atomic context (e.g spinlock,
> > interrupts).
>
> I'm probably missing some context but it looks that you are saying that
> kmap_local_page() is not suited for any use in atomic context (you are
> mentioning spinlocks).
>
> The commit message (that I know pretty well since it's the exact copy, word by
> word, of my boiler plate commits) explains that kmap_local_page() is perfectly
> usable in atomic context (including interrupts).

Thanks for the confirmation, I misread the change log and thought it
said it can't be used in interrupts.

>
> I don't know this code, however I am not able to see why these vringh IOTLB
> helpers cannot work if used under spinlocks. Can you please elaborate a little
> more?

My fault, see above.

>
> > If yes, should we document this? (Or should we introduce a boolean to
> > say whether an IOTLB variant can be used in an atomic context)?
>
> Again, you'll have no problems from the use of kmap_local_page() and so you
> don't need any boolean to tell whether or not the code is running in atomic
> context.
>
> Please take a look at the Highmem documentation which has been recently
> reworked and extended by me: https://docs.kernel.org/mm/highmem.html

This is really helpful.

>
> Anyway, I have been ATK 12 or 13 hours in a row. So I'm probably missing the
> whole picture.

It's me that missed the motivation of kmap_local().

Thanks

>
> Thanks,
>
> Fabio
>
> > Thanks
> >
> > > so that mere replacements of the old API with the new one
> > > is all that is required (i.e., there is no need to explicitly add calls
> > > to pagefault_disable() and/or preempt_disable()).
> > >
> > > Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
> > > ---
> > >
> > > Notes:
> > >     v2:
> > >     - added this patch since checkpatch.pl complained about deprecation
> > >
> > >       of kmap_atomic() touched by next patch
> > >
> > >  drivers/vhost/vringh.c | 8 ++++----
> > >  1 file changed, 4 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/drivers/vhost/vringh.c b/drivers/vhost/vringh.c
> > > index a1e27da54481..0ba3ef809e48 100644
> > > --- a/drivers/vhost/vringh.c
> > > +++ b/drivers/vhost/vringh.c
> > > @@ -1220,10 +1220,10 @@ static inline int getu16_iotlb(const struct vringh
> > > *vrh,
> > >         if (ret < 0)
> > >
> > >                 return ret;
> > >
> > > -       kaddr = kmap_atomic(iov.bv_page);
> > > +       kaddr = kmap_local_page(iov.bv_page);
> > >
> > >         from = kaddr + iov.bv_offset;
> > >         *val = vringh16_to_cpu(vrh, READ_ONCE(*(__virtio16 *)from));
> > >
> > > -       kunmap_atomic(kaddr);
> > > +       kunmap_local(kaddr);
> > >
> > >         return 0;
> > >
> > >  }
> > >
> > > @@ -1241,10 +1241,10 @@ static inline int putu16_iotlb(const struct vringh
> > > *vrh,
> > >         if (ret < 0)
> > >
> > >                 return ret;
> > >
> > > -       kaddr = kmap_atomic(iov.bv_page);
> > > +       kaddr = kmap_local_page(iov.bv_page);
> > >
> > >         to = kaddr + iov.bv_offset;
> > >         WRITE_ONCE(*(__virtio16 *)to, cpu_to_vringh16(vrh, val));
> > >
> > > -       kunmap_atomic(kaddr);
> > > +       kunmap_local(kaddr);
> > >
> > >         return 0;
> > >
> > >  }
> > >
> > > --
> > > 2.39.2
>
>
>
>
  
Stefano Garzarella March 16, 2023, 8:09 a.m. UTC | #4
On Wed, Mar 15, 2023 at 10:12 PM Fabio M. De Francesco
<fmdefrancesco@gmail.com> wrote:
>
> On Tuesday, 14 March 2023 04:56:08 CET Jason Wang wrote:
> > On Thu, Mar 2, 2023 at 7:34 PM Stefano Garzarella <sgarzare@redhat.com> wrote:
> > > kmap_atomic() is deprecated in favor of kmap_local_page().
> >
> > It's better to mention the commit or code that introduces this.
> >
> > > With kmap_local_page() the mappings are per thread, CPU local, can take
> > > page-faults, and can be called from any context (including interrupts).
> > > Furthermore, the tasks can be preempted and, when they are scheduled to
> > > run again, the kernel virtual addresses are restored and still valid.
> > >
> > > kmap_atomic() is implemented like a kmap_local_page() which also disables
> > > page-faults and preemption (the latter only for !PREEMPT_RT kernels,
> > > otherwise it only disables migration).
> > >
> > > The code within the mappings/un-mappings in getu16_iotlb() and
> > > putu16_iotlb() don't depend on the above-mentioned side effects of
> > > kmap_atomic(),
> >
> > Note we used to use spinlock to protect simulators (at least until
> > patch 7, so we probably need to re-order the patches at least) so I
> > think this is only valid when:
> >
> > The vringh IOTLB helpers are not used in atomic context (e.g spinlock,
> > interrupts).
>
> I'm probably missing some context but it looks that you are saying that
> kmap_local_page() is not suited for any use in atomic context (you are
> mentioning spinlocks).
>
> The commit message (that I know pretty well since it's the exact copy, word by
> word, of my boiler plate commits)

I hope it's not a problem for you; should I mention it somehow?

I searched for the last commits that made a similar change and found
yours that explained it perfectly ;-)

Do I need to rephrase?

> explains that kmap_local_page() is perfectly
> usable in atomic context (including interrupts).
>
> I don't know this code, however I am not able to see why these vringh IOTLB
> helpers cannot work if used under spinlocks. Can you please elaborate a little
> more?
>
> > If yes, should we document this? (Or should we introduce a boolean to
> > say whether an IOTLB variant can be used in an atomic context)?
>
> Again, you'll have no problems from the use of kmap_local_page() and so you
> don't need any boolean to tell whether or not the code is running in atomic
> context.
>
> Please take a look at the Highmem documentation which has been recently
> reworked and extended by me: https://docs.kernel.org/mm/highmem.html
>
> Anyway, I have been ATK 12 or 13 hours in a row. So I'm probably missing the
> whole picture.

Thanks for your useful info!
Stefano
  
Fabio M. De Francesco March 16, 2023, 9:13 a.m. UTC | #5
On Thursday, 2 March 2023 12:34:16 CET Stefano Garzarella wrote:
> kmap_atomic() is deprecated in favor of kmap_local_page().
> 
> With kmap_local_page() the mappings are per thread, CPU local, can take
> page-faults, and can be called from any context (including interrupts).
> Furthermore, the tasks can be preempted and, when they are scheduled to
> run again, the kernel virtual addresses are restored and still valid.
> 
> kmap_atomic() is implemented like a kmap_local_page() which also disables
> page-faults and preemption (the latter only for !PREEMPT_RT kernels,
> otherwise it only disables migration).
> 
> The code within the mappings/un-mappings in getu16_iotlb() and
> putu16_iotlb() don't depend on the above-mentioned side effects of
> kmap_atomic(), so that mere replacements of the old API with the new one
> is all that is required (i.e., there is no need to explicitly add calls
> to pagefault_disable() and/or preempt_disable()).

It seems that my commit message is quite clear and complete and therefore has 
already been reused by others who have somehow given me credit. 

I would really appreciate it being mentioned here that you are reusing a 
"boiler plate" commit message of my own making and Cc me :-)

Thanks,

Fabio

> Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
> ---
> 
> Notes:
>     v2:
>     - added this patch since checkpatch.pl complained about deprecation
>       of kmap_atomic() touched by next patch
> 
>  drivers/vhost/vringh.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/vhost/vringh.c b/drivers/vhost/vringh.c
> index a1e27da54481..0ba3ef809e48 100644
> --- a/drivers/vhost/vringh.c
> +++ b/drivers/vhost/vringh.c
> @@ -1220,10 +1220,10 @@ static inline int getu16_iotlb(const struct vringh
> *vrh,
>  	if (ret < 0)
>  		return ret;
> 
> -	kaddr = kmap_atomic(iov.bv_page);
> +	kaddr = kmap_local_page(iov.bv_page);
>  	from = kaddr + iov.bv_offset;
>  	*val = vringh16_to_cpu(vrh, READ_ONCE(*(__virtio16 *)from));
> -	kunmap_atomic(kaddr);
> +	kunmap_local(kaddr);
> 
>  	return 0;
>  }
> @@ -1241,10 +1241,10 @@ static inline int putu16_iotlb(const struct vringh
> *vrh,
>  	if (ret < 0)
>  		return ret;
> 
> -	kaddr = kmap_atomic(iov.bv_page);
> +	kaddr = kmap_local_page(iov.bv_page);
>  	to = kaddr + iov.bv_offset;
>  	WRITE_ONCE(*(__virtio16 *)to, cpu_to_vringh16(vrh, val));
> -	kunmap_atomic(kaddr);
> +	kunmap_local(kaddr);
> 
>  	return 0;
>  }
> --
> 2.39.2
  
Stefano Garzarella March 16, 2023, 9:17 a.m. UTC | #6
On Thu, Mar 16, 2023 at 10:13:39AM +0100, Fabio M. De Francesco wrote:
>On Thursday, 2 March 2023 12:34:16 CET Stefano Garzarella wrote:
>> kmap_atomic() is deprecated in favor of kmap_local_page().
>>
>> With kmap_local_page() the mappings are per thread, CPU local, can take
>> page-faults, and can be called from any context (including interrupts).
>> Furthermore, the tasks can be preempted and, when they are scheduled to
>> run again, the kernel virtual addresses are restored and still valid.
>>
>> kmap_atomic() is implemented like a kmap_local_page() which also disables
>> page-faults and preemption (the latter only for !PREEMPT_RT kernels,
>> otherwise it only disables migration).
>>
>> The code within the mappings/un-mappings in getu16_iotlb() and
>> putu16_iotlb() don't depend on the above-mentioned side effects of
>> kmap_atomic(), so that mere replacements of the old API with the new one
>> is all that is required (i.e., there is no need to explicitly add calls
>> to pagefault_disable() and/or preempt_disable()).
>
>It seems that my commit message is quite clear and complete and therefore has
>already been reused by others who have somehow given me credit.
>
>I would really appreciate it being mentioned here that you are reusing a
>"boiler plate" commit message of my own making and Cc me :-)

Yes of course, sorry for not doing this previously!

Thanks,
Stefano
  
Fabio M. De Francesco March 16, 2023, 9:25 a.m. UTC | #7
On Thursday, 16 March 2023 09:09:29 CET Stefano Garzarella wrote:
> On Wed, Mar 15, 2023 at 10:12 PM Fabio M. De Francesco
> 
> <fmdefrancesco@gmail.com> wrote:
> > On Tuesday, 14 March 2023 04:56:08 CET Jason Wang wrote:
> > > On Thu, Mar 2, 2023 at 7:34 PM Stefano Garzarella <sgarzare@redhat.com> wrote:
> > > > kmap_atomic() is deprecated in favor of kmap_local_page().
> > > 
> > > It's better to mention the commit or code that introduces this.
> > > 
> > > > With kmap_local_page() the mappings are per thread, CPU local, can take
> > > > page-faults, and can be called from any context (including interrupts).
> > > > Furthermore, the tasks can be preempted and, when they are scheduled to
> > > > run again, the kernel virtual addresses are restored and still valid.
> > > > 
> > > > kmap_atomic() is implemented like a kmap_local_page() which also
> > > > disables
> > > > page-faults and preemption (the latter only for !PREEMPT_RT kernels,
> > > > otherwise it only disables migration).
> > > > 
> > > > The code within the mappings/un-mappings in getu16_iotlb() and
> > > > putu16_iotlb() don't depend on the above-mentioned side effects of
> > > > kmap_atomic(),
> > > 
> > > Note we used to use spinlock to protect simulators (at least until
> > > patch 7, so we probably need to re-order the patches at least) so I
> > > think this is only valid when:
> > > 
> > > The vringh IOTLB helpers are not used in atomic context (e.g spinlock,
> > > interrupts).
> > 
> > I'm probably missing some context but it looks that you are saying that
> > kmap_local_page() is not suited for any use in atomic context (you are
> > mentioning spinlocks).
> > 
> > The commit message (that I know pretty well since it's the exact copy, word
> > by word, of my boiler plate commits)
> 
> I hope it's not a problem for you, should I mention it somehow?

Sorry, I had missed your last message when I wrote another message a few 
minutes ago in this thread.

Obviously, I'm happy that my commit message is being reused. As I said in 
the other message, I would appreciate some kind of credit as the author.

I proposed a means you can use, but feel free to ignore my suggestion and do 
differently if you prefer to.

Again thanks,

Fabio

> I searched for the last commits that made a similar change and found
> yours that explained it perfectly ;-)
> 
> Do I need to rephrase?
> 
> > explains that kmap_local_page() is perfectly
> > usable in atomic context (including interrupts).
> > 
> > I don't know this code, however I am not able to see why these vringh IOTLB
> > helpers cannot work if used under spinlocks. Can you please elaborate a
> > little more?
> > 
> > > If yes, should we document this? (Or should we introduce a boolean to
> > > say whether an IOTLB variant can be used in an atomic context)?
> > 
> > Again, you'll have no problems from the use of kmap_local_page() and so you
> > don't need any boolean to tell whether or not the code is running in atomic
> > context.
> > 
> > Please take a look at the Highmem documentation which has been recently
> > reworked and extended by me: https://docs.kernel.org/mm/highmem.html
> > 
> > Anyway, I have been ATK 12 or 13 hours in a row. So I'm probably missing the
> > whole picture.
> 
> Thanks for your useful info!
> Stefano
  

Patch

diff --git a/drivers/vhost/vringh.c b/drivers/vhost/vringh.c
index a1e27da54481..0ba3ef809e48 100644
--- a/drivers/vhost/vringh.c
+++ b/drivers/vhost/vringh.c
@@ -1220,10 +1220,10 @@  static inline int getu16_iotlb(const struct vringh *vrh,
 	if (ret < 0)
 		return ret;
 
-	kaddr = kmap_atomic(iov.bv_page);
+	kaddr = kmap_local_page(iov.bv_page);
 	from = kaddr + iov.bv_offset;
 	*val = vringh16_to_cpu(vrh, READ_ONCE(*(__virtio16 *)from));
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);
 
 	return 0;
 }
@@ -1241,10 +1241,10 @@  static inline int putu16_iotlb(const struct vringh *vrh,
 	if (ret < 0)
 		return ret;
 
-	kaddr = kmap_atomic(iov.bv_page);
+	kaddr = kmap_local_page(iov.bv_page);
 	to = kaddr + iov.bv_offset;
 	WRITE_ONCE(*(__virtio16 *)to, cpu_to_vringh16(vrh, val));
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);
 
 	return 0;
 }