Message ID: 20230302113421.174582-4-sgarzare@redhat.com
State: New
Headers:
From: Stefano Garzarella <sgarzare@redhat.com>
To: virtualization@lists.linux-foundation.org
Cc: Andrey Zhadchenko <andrey.zhadchenko@virtuozzo.com>, eperezma@redhat.com, netdev@vger.kernel.org, stefanha@redhat.com, linux-kernel@vger.kernel.org, Jason Wang <jasowang@redhat.com>, "Michael S. Tsirkin" <mst@redhat.com>, kvm@vger.kernel.org, Stefano Garzarella <sgarzare@redhat.com>
Subject: [PATCH v2 3/8] vringh: replace kmap_atomic() with kmap_local_page()
Date: Thu, 2 Mar 2023 12:34:16 +0100
Message-Id: <20230302113421.174582-4-sgarzare@redhat.com>
In-Reply-To: <20230302113421.174582-1-sgarzare@redhat.com>
References: <20230302113421.174582-1-sgarzare@redhat.com>
Series: vdpa_sim: add support for user VA
Commit Message
Stefano Garzarella
March 2, 2023, 11:34 a.m. UTC
kmap_atomic() is deprecated in favor of kmap_local_page().
With kmap_local_page() the mappings are per thread, CPU local, can take
page-faults, and can be called from any context (including interrupts).
Furthermore, the tasks can be preempted and, when they are scheduled to
run again, the kernel virtual addresses are restored and still valid.
kmap_atomic() is implemented like a kmap_local_page() which also disables
page-faults and preemption (the latter only for !PREEMPT_RT kernels,
otherwise it only disables migration).
The code within the mappings/un-mappings in getu16_iotlb() and
putu16_iotlb() doesn't depend on the above-mentioned side effects of
kmap_atomic(), so a mere replacement of the old API with the new one
is all that is required (i.e., there is no need to explicitly add calls
to pagefault_disable() and/or preempt_disable()).
Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
---
Notes:
v2:
- added this patch since checkpatch.pl complained about the deprecation
  of kmap_atomic() touched by the next patch
drivers/vhost/vringh.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
Comments
On Thu, Mar 2, 2023 at 7:34 PM Stefano Garzarella <sgarzare@redhat.com> wrote:
>
> kmap_atomic() is deprecated in favor of kmap_local_page().

It's better to mention the commit or code that introduces this.

> With kmap_local_page() the mappings are per thread, CPU local, can take
> page-faults, and can be called from any context (including interrupts).
> Furthermore, the tasks can be preempted and, when they are scheduled to
> run again, the kernel virtual addresses are restored and still valid.
>
> kmap_atomic() is implemented like a kmap_local_page() which also disables
> page-faults and preemption (the latter only for !PREEMPT_RT kernels,
> otherwise it only disables migration).
>
> The code within the mappings/un-mappings in getu16_iotlb() and
> putu16_iotlb() don't depend on the above-mentioned side effects of
> kmap_atomic(),

Note we used to use spinlock to protect simulators (at least until
patch 7, so we probably need to re-order the patches at least) so I
think this is only valid when:

The vringh IOTLB helpers are not used in atomic context (e.g. spinlock,
interrupts).

If yes, should we document this? (Or should we introduce a boolean to
say whether an IOTLB variant can be used in an atomic context)?

Thanks

> so that mere replacements of the old API with the new one
> is all that is required (i.e., there is no need to explicitly add calls
> to pagefault_disable() and/or preempt_disable()).
>
> Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
> [...]
On Tuesday, March 14, 2023, 04:56:08 CET, Jason Wang wrote:
> On Thu, Mar 2, 2023 at 7:34 PM Stefano Garzarella <sgarzare@redhat.com> wrote:
> > kmap_atomic() is deprecated in favor of kmap_local_page().
>
> It's better to mention the commit or code that introduces this.
>
> > [...]
> >
> > The code within the mappings/un-mappings in getu16_iotlb() and
> > putu16_iotlb() don't depend on the above-mentioned side effects of
> > kmap_atomic(),
>
> Note we used to use spinlock to protect simulators (at least until
> patch 7, so we probably need to re-order the patches at least) so I
> think this is only valid when:
>
> The vringh IOTLB helpers are not used in atomic context (e.g. spinlock,
> interrupts).

I'm probably missing some context but it looks like you are saying that
kmap_local_page() is not suited for any use in atomic context (you are
mentioning spinlocks).

The commit message (that I know pretty well since it's the exact copy,
word by word, of my boiler plate commits) explains that kmap_local_page()
is perfectly usable in atomic context (including interrupts).

I don't know this code, however I am not able to see why these vringh
IOTLB helpers cannot work if used under spinlocks. Can you please
elaborate a little more?

> If yes, should we document this? (Or should we introduce a boolean to
> say whether an IOTLB variant can be used in an atomic context)?

Again, you'll have no problems from the use of kmap_local_page() and so
you don't need any boolean to tell whether or not the code is running in
atomic context.

Please take a look at the Highmem documentation which has been recently
reworked and extended by me: https://docs.kernel.org/mm/highmem.html

Anyway, I have been ATK 12 or 13 hours in a row. So I'm probably missing
the whole picture.

Thanks,

Fabio

> [...]
On Thu, Mar 16, 2023 at 5:12 AM Fabio M. De Francesco
<fmdefrancesco@gmail.com> wrote:
>
> [...]
>
> I'm probably missing some context but it looks like you are saying that
> kmap_local_page() is not suited for any use in atomic context (you are
> mentioning spinlocks).
>
> The commit message (that I know pretty well since it's the exact copy,
> word by word, of my boiler plate commits) explains that kmap_local_page()
> is perfectly usable in atomic context (including interrupts).

Thanks for the confirmation, I misread the change log and thought it said
it can't be used in interrupts.

> I don't know this code, however I am not able to see why these vringh
> IOTLB helpers cannot work if used under spinlocks. Can you please
> elaborate a little more?

My fault, see above.

> > If yes, should we document this? (Or should we introduce a boolean to
> > say whether an IOTLB variant can be used in an atomic context)?
>
> Again, you'll have no problems from the use of kmap_local_page() and so
> you don't need any boolean to tell whether or not the code is running in
> atomic context.
>
> Please take a look at the Highmem documentation which has been recently
> reworked and extended by me: https://docs.kernel.org/mm/highmem.html

This is really helpful.

> Anyway, I have been ATK 12 or 13 hours in a row. So I'm probably missing
> the whole picture.

It's me that misses the motivation of kmap_local().

Thanks

> [...]
On Wed, Mar 15, 2023 at 10:12 PM Fabio M. De Francesco
<fmdefrancesco@gmail.com> wrote:
>
> [...]
>
> The commit message (that I know pretty well since it's the exact copy,
> word by word, of my boiler plate commits)

I hope it's not a problem for you, should I mention it somehow?

I searched for the last commits that made a similar change and found
yours that explained it perfectly ;-)

Do I need to rephrase?

> explains that kmap_local_page() is perfectly usable in atomic context
> (including interrupts).
>
> [...]
>
> Anyway, I have been ATK 12 or 13 hours in a row. So I'm probably missing
> the whole picture.

Thanks for your useful info!
Stefano
On Thursday, March 2, 2023, 12:34:16 CET, Stefano Garzarella wrote:
> kmap_atomic() is deprecated in favor of kmap_local_page().
>
> With kmap_local_page() the mappings are per thread, CPU local, can take
> page-faults, and can be called from any context (including interrupts).
> Furthermore, the tasks can be preempted and, when they are scheduled to
> run again, the kernel virtual addresses are restored and still valid.
>
> kmap_atomic() is implemented like a kmap_local_page() which also disables
> page-faults and preemption (the latter only for !PREEMPT_RT kernels,
> otherwise it only disables migration).
>
> The code within the mappings/un-mappings in getu16_iotlb() and
> putu16_iotlb() don't depend on the above-mentioned side effects of
> kmap_atomic(), so that mere replacements of the old API with the new one
> is all that is required (i.e., there is no need to explicitly add calls
> to pagefault_disable() and/or preempt_disable()).

It seems that my commit message is quite clear and complete and therefore
has already been reused by others who have somehow given me credit.

I would really appreciate it being mentioned here that you are reusing a
"boiler plate" commit message of my own making and Cc me :-)

Thanks,

Fabio

> Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
> [...]
On Thu, Mar 16, 2023 at 10:13:39AM +0100, Fabio M. De Francesco wrote:
> On Thursday, March 2, 2023, 12:34:16 CET, Stefano Garzarella wrote:
> > [...]
>
> It seems that my commit message is quite clear and complete and therefore
> has already been reused by others who have somehow given me credit.
>
> I would really appreciate it being mentioned here that you are reusing a
> "boiler plate" commit message of my own making and Cc me :-)

Yes of course, sorry for not doing this previously!

Thanks,
Stefano
On Thursday, March 16, 2023, 09:09:29 CET, Stefano Garzarella wrote:
> On Wed, Mar 15, 2023 at 10:12 PM Fabio M. De Francesco
> <fmdefrancesco@gmail.com> wrote:
> > On Tuesday, March 14, 2023, 04:56:08 CET, Jason Wang wrote:
> > > [...]
> >
> > The commit message (that I know pretty well since it's the exact copy,
> > word by word, of my boiler plate commits)
>
> I hope it's not a problem for you, should I mention it somehow?

Sorry, I had missed your last message when I wrote another message a few
minutes ago in this thread.

Obviously, I'm happy that my commit message is being reused. As I said in
the other message, I would appreciate being credited somehow as the
author. I proposed a means you can use, but feel free to ignore my
suggestion and do differently if you prefer to.

Again thanks,

Fabio

> I searched for the last commits that made a similar change and found
> yours that explained it perfectly ;-)
>
> Do I need to rephrase?
>
> [...]
diff --git a/drivers/vhost/vringh.c b/drivers/vhost/vringh.c
index a1e27da54481..0ba3ef809e48 100644
--- a/drivers/vhost/vringh.c
+++ b/drivers/vhost/vringh.c
@@ -1220,10 +1220,10 @@ static inline int getu16_iotlb(const struct vringh *vrh,
 	if (ret < 0)
 		return ret;
 
-	kaddr = kmap_atomic(iov.bv_page);
+	kaddr = kmap_local_page(iov.bv_page);
 	from = kaddr + iov.bv_offset;
 	*val = vringh16_to_cpu(vrh, READ_ONCE(*(__virtio16 *)from));
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);
 
 	return 0;
 }
@@ -1241,10 +1241,10 @@ static inline int putu16_iotlb(const struct vringh *vrh,
 	if (ret < 0)
 		return ret;
 
-	kaddr = kmap_atomic(iov.bv_page);
+	kaddr = kmap_local_page(iov.bv_page);
 	to = kaddr + iov.bv_offset;
 	WRITE_ONCE(*(__virtio16 *)to, cpu_to_vringh16(vrh, val));
-	kunmap_atomic(kaddr);
+	kunmap_local(kaddr);
 
 	return 0;
 }