From patchwork Thu Mar 30 08:57:59 2023
X-Patchwork-Submitter: David Stevens
X-Patchwork-Id: 76977
From: David Stevens
To: Marc Zyngier, Sean Christopherson
Cc: Oliver Upton, Paolo Bonzini, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org,
    kvm@vger.kernel.org, David Stevens
Subject: [PATCH v6 1/4] KVM: mmu: introduce new gfn_to_pfn_noref functions
Date: Thu, 30 Mar 2023 17:57:59 +0900
Message-Id: <20230330085802.2414466-2-stevensd@google.com>
In-Reply-To:
<20230330085802.2414466-1-stevensd@google.com> References: <20230330085802.2414466-1-stevensd@google.com> MIME-Version: 1.0 X-Spam-Status: No, score=-0.2 required=5.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, SPF_HELO_NONE,SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1761782746234531207?= X-GMAIL-MSGID: =?utf-8?q?1761782746234531207?= From: David Stevens Introduce new gfn_to_pfn_noref functions that parallel existing gfn_to_pfn functions. These functions can be used when the caller does not need to maintain a reference to the returned pfn (i.e. when usage is guarded by a mmu_notifier). The noref functions take an out parameter that is used to return the struct page if the hva was resolved via gup. The caller needs to drop its reference such a returned page. Signed-off-by: David Stevens --- include/linux/kvm_host.h | 18 ++++ virt/kvm/kvm_main.c | 209 ++++++++++++++++++++++++++++----------- virt/kvm/kvm_mm.h | 6 +- virt/kvm/pfncache.c | 12 ++- 4 files changed, 188 insertions(+), 57 deletions(-) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 90edc16d37e5..146f220cc25b 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -1162,8 +1162,22 @@ kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn, bool atomic, bool interruptible, bool *async, bool write_fault, bool *writable, hva_t *hva); +kvm_pfn_t gfn_to_pfn_noref(struct kvm *kvm, gfn_t gfn, struct page **page); +kvm_pfn_t gfn_to_pfn_noref_prot(struct kvm *kvm, gfn_t gfn, + bool write_fault, bool *writable, + struct page **page); +kvm_pfn_t gfn_to_pfn_noref_memslot(const struct kvm_memory_slot *slot, + gfn_t gfn, struct page **page); +kvm_pfn_t gfn_to_pfn_noref_memslot_atomic(const struct kvm_memory_slot *slot, + gfn_t gfn, struct page **page); +kvm_pfn_t __gfn_to_pfn_noref_memslot(const struct kvm_memory_slot *slot, + gfn_t gfn, bool atomic, bool interruptible, + bool *async, bool write_fault, bool *writable, + hva_t *hva, struct page **page); + void kvm_release_pfn_clean(kvm_pfn_t pfn); void kvm_release_pfn_dirty(kvm_pfn_t pfn); +void kvm_release_pfn_noref_clean(kvm_pfn_t pfn, struct page *page); void kvm_set_pfn_dirty(kvm_pfn_t pfn); void kvm_set_pfn_accessed(kvm_pfn_t pfn); @@ -1242,6 +1256,10 @@ struct kvm_memslots *kvm_vcpu_memslots(struct kvm_vcpu *vcpu); struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn); kvm_pfn_t kvm_vcpu_gfn_to_pfn_atomic(struct kvm_vcpu *vcpu, gfn_t gfn); kvm_pfn_t kvm_vcpu_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn); +kvm_pfn_t kvm_vcpu_gfn_to_pfn_noref_atomic(struct kvm_vcpu *vcpu, gfn_t gfn, + struct page **page); +kvm_pfn_t kvm_vcpu_gfn_to_pfn_noref(struct kvm_vcpu *vcpu, gfn_t gfn, + struct page **page); int kvm_vcpu_map(struct kvm_vcpu *vcpu, gpa_t gpa, struct kvm_host_map *map); void kvm_vcpu_unmap(struct kvm_vcpu *vcpu, struct kvm_host_map *map, bool dirty); unsigned long kvm_vcpu_gfn_to_hva(struct kvm_vcpu *vcpu, gfn_t gfn); diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index f40b72eb0e7b..007dd984eeea 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -2484,9 +2484,9 @@ static inline int check_user_page_hwpoison(unsigned long addr) * only part that runs if we can in atomic context. 
*/ static bool hva_to_pfn_fast(unsigned long addr, bool write_fault, - bool *writable, kvm_pfn_t *pfn) + bool *writable, kvm_pfn_t *pfn, + struct page **page) { - struct page *page[1]; /* * Fast pin a writable pfn only if it is a write fault request @@ -2497,7 +2497,7 @@ static bool hva_to_pfn_fast(unsigned long addr, bool write_fault, return false; if (get_user_page_fast_only(addr, FOLL_WRITE, page)) { - *pfn = page_to_pfn(page[0]); + *pfn = page_to_pfn(*page); if (writable) *writable = true; @@ -2512,10 +2512,10 @@ static bool hva_to_pfn_fast(unsigned long addr, bool write_fault, * 1 indicates success, -errno is returned if error is detected. */ static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault, - bool interruptible, bool *writable, kvm_pfn_t *pfn) + bool interruptible, bool *writable, kvm_pfn_t *pfn, + struct page **page) { unsigned int flags = FOLL_HWPOISON; - struct page *page; int npages; might_sleep(); @@ -2530,7 +2530,7 @@ static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault, if (interruptible) flags |= FOLL_INTERRUPTIBLE; - npages = get_user_pages_unlocked(addr, 1, &page, flags); + npages = get_user_pages_unlocked(addr, 1, page, flags); if (npages != 1) return npages; @@ -2540,11 +2540,11 @@ static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault, if (get_user_page_fast_only(addr, FOLL_WRITE, &wpage)) { *writable = true; - put_page(page); - page = wpage; + put_page(*page); + *page = wpage; } } - *pfn = page_to_pfn(page); + *pfn = page_to_pfn(*page); return npages; } @@ -2559,16 +2559,6 @@ static bool vma_is_valid(struct vm_area_struct *vma, bool write_fault) return true; } -static int kvm_try_get_pfn(kvm_pfn_t pfn) -{ - struct page *page = kvm_pfn_to_refcounted_page(pfn); - - if (!page) - return 1; - - return get_page_unless_zero(page); -} - static int hva_to_pfn_remapped(struct vm_area_struct *vma, unsigned long addr, bool write_fault, bool *writable, kvm_pfn_t *p_pfn) @@ -2607,26 +2597,6 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma, *writable = pte_write(*ptep); pfn = pte_pfn(*ptep); - /* - * Get a reference here because callers of *hva_to_pfn* and - * *gfn_to_pfn* ultimately call kvm_release_pfn_clean on the - * returned pfn. This is only needed if the VMA has VM_MIXEDMAP - * set, but the kvm_try_get_pfn/kvm_release_pfn_clean pair will - * simply do nothing for reserved pfns. - * - * Whoever called remap_pfn_range is also going to call e.g. - * unmap_mapping_range before the underlying pages are freed, - * causing a call to our MMU notifier. - * - * Certain IO or PFNMAP mappings can be backed with valid - * struct pages, but be allocated without refcounting e.g., - * tail pages of non-compound higher order allocations, which - * would then underflow the refcount when the caller does the - * required put_page. Don't allow those pages here. - */ - if (!kvm_try_get_pfn(pfn)) - r = -EFAULT; - out: pte_unmap_unlock(ptep, ptl); *p_pfn = pfn; @@ -2643,6 +2613,7 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma, * host page is not in the memory * @write_fault: whether we should get a writable host page * @writable: whether it allows to map a writable host page for !@write_fault + * @page: outparam for the refcounted page assicated with the pfn, if any * * The function will map a writable host page for these two cases: * 1): @write_fault = true @@ -2650,23 +2621,25 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma, * whether the mapping is writable. 
*/ kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool interruptible, - bool *async, bool write_fault, bool *writable) + bool *async, bool write_fault, bool *writable, + struct page **page) { struct vm_area_struct *vma; kvm_pfn_t pfn; int npages, r; + *page = NULL; /* we can do it either atomically or asynchronously, not both */ BUG_ON(atomic && async); - if (hva_to_pfn_fast(addr, write_fault, writable, &pfn)) + if (hva_to_pfn_fast(addr, write_fault, writable, &pfn, page)) return pfn; if (atomic) return KVM_PFN_ERR_FAULT; npages = hva_to_pfn_slow(addr, async, write_fault, interruptible, - writable, &pfn); + writable, &pfn, page); if (npages == 1) return pfn; if (npages == -EINTR) @@ -2700,9 +2673,37 @@ kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool interruptible, return pfn; } -kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn, - bool atomic, bool interruptible, bool *async, - bool write_fault, bool *writable, hva_t *hva) +/* + * Helper function for managing refcounts of pfn returned by hva_to_pfn. + * @pfn: pfn returned by hva_to_pfn + * @page: page outparam from hva_to_pfn + * + * In cases where access to the pfn resolved by hva_to_pfn isn't protected by + * our MMU notifier, if the pfn was resolved by hva_to_pfn_remapped instead of + * gup, then its refcount needs to be bumped. + * + * Certain IO or PFNMAP mappings can be backed with valid struct pages, but be + * allocated without refcounting e.g., tail pages of non-compound higher order + * allocations, which would then underflow the refcount when the caller does the + * required put_page. Don't allow those pages here. + */ +kvm_pfn_t kvm_try_get_refcounted_page_ref(kvm_pfn_t pfn, struct page *page) +{ + /* If @page is valid, KVM already has a reference to the pfn/page. 
*/ + if (page || is_error_pfn(pfn)) + return pfn; + + page = kvm_pfn_to_refcounted_page(pfn); + if (!page || get_page_unless_zero(page)) + return pfn; + + return KVM_PFN_ERR_FAULT; +} + +kvm_pfn_t __gfn_to_pfn_noref_memslot(const struct kvm_memory_slot *slot, gfn_t gfn, + bool atomic, bool interruptible, bool *async, + bool write_fault, bool *writable, hva_t *hva, + struct page **page) { unsigned long addr = __gfn_to_hva_many(slot, gfn, NULL, write_fault); @@ -2728,47 +2729,134 @@ kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn, } return hva_to_pfn(addr, atomic, interruptible, async, write_fault, - writable); + writable, page); +} +EXPORT_SYMBOL_GPL(__gfn_to_pfn_noref_memslot); + +kvm_pfn_t gfn_to_pfn_noref_prot(struct kvm *kvm, gfn_t gfn, bool write_fault, + bool *writable, struct page **page) +{ + return __gfn_to_pfn_noref_memslot(gfn_to_memslot(kvm, gfn), gfn, false, false, + NULL, write_fault, writable, NULL, page); +} +EXPORT_SYMBOL_GPL(gfn_to_pfn_noref_prot); + +kvm_pfn_t gfn_to_pfn_noref_memslot(const struct kvm_memory_slot *slot, gfn_t gfn, + struct page **page) +{ + return __gfn_to_pfn_noref_memslot(slot, gfn, false, false, NULL, true, + NULL, NULL, page); +} +EXPORT_SYMBOL_GPL(gfn_to_pfn_noref_memslot); + +kvm_pfn_t gfn_to_pfn_noref_memslot_atomic(const struct kvm_memory_slot *slot, + gfn_t gfn, struct page **page) +{ + return __gfn_to_pfn_noref_memslot(slot, gfn, true, false, NULL, true, NULL, + NULL, page); +} +EXPORT_SYMBOL_GPL(gfn_to_pfn_noref_memslot_atomic); + +kvm_pfn_t kvm_vcpu_gfn_to_pfn_noref_atomic(struct kvm_vcpu *vcpu, gfn_t gfn, + struct page **page) +{ + return gfn_to_pfn_noref_memslot_atomic( + kvm_vcpu_gfn_to_memslot(vcpu, gfn), gfn, page); +} +EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_pfn_noref_atomic); + +kvm_pfn_t gfn_to_pfn_noref(struct kvm *kvm, gfn_t gfn, struct page **page) +{ + return gfn_to_pfn_noref_memslot(gfn_to_memslot(kvm, gfn), gfn, page); +} +EXPORT_SYMBOL_GPL(gfn_to_pfn_noref); + +kvm_pfn_t kvm_vcpu_gfn_to_pfn_noref(struct kvm_vcpu *vcpu, gfn_t gfn, + struct page **page) +{ + return gfn_to_pfn_noref_memslot(kvm_vcpu_gfn_to_memslot(vcpu, gfn), + gfn, page); +} +EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_pfn_noref); + +kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn, + bool atomic, bool interruptible, bool *async, + bool write_fault, bool *writable, hva_t *hva) +{ + struct page *page; + kvm_pfn_t pfn; + + pfn = __gfn_to_pfn_noref_memslot(slot, gfn, atomic, interruptible, async, + write_fault, writable, hva, &page); + + return kvm_try_get_refcounted_page_ref(pfn, page); } EXPORT_SYMBOL_GPL(__gfn_to_pfn_memslot); kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault, bool *writable) { - return __gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn, false, false, - NULL, write_fault, writable, NULL); + struct page *page; + kvm_pfn_t pfn; + + pfn = gfn_to_pfn_noref_prot(kvm, gfn, write_fault, writable, &page); + + return kvm_try_get_refcounted_page_ref(pfn, page); } EXPORT_SYMBOL_GPL(gfn_to_pfn_prot); kvm_pfn_t gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn) { - return __gfn_to_pfn_memslot(slot, gfn, false, false, NULL, true, - NULL, NULL); + struct page *page; + kvm_pfn_t pfn; + + pfn = gfn_to_pfn_noref_memslot(slot, gfn, &page); + + return kvm_try_get_refcounted_page_ref(pfn, page); } EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot); kvm_pfn_t gfn_to_pfn_memslot_atomic(const struct kvm_memory_slot *slot, gfn_t gfn) { - return __gfn_to_pfn_memslot(slot, gfn, true, false, NULL, true, - NULL, 
NULL); + struct page *page; + kvm_pfn_t pfn; + + pfn = gfn_to_pfn_noref_memslot_atomic(slot, gfn, &page); + + return kvm_try_get_refcounted_page_ref(pfn, page); } EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot_atomic); kvm_pfn_t kvm_vcpu_gfn_to_pfn_atomic(struct kvm_vcpu *vcpu, gfn_t gfn) { - return gfn_to_pfn_memslot_atomic(kvm_vcpu_gfn_to_memslot(vcpu, gfn), gfn); + struct page *page; + kvm_pfn_t pfn; + + pfn = kvm_vcpu_gfn_to_pfn_noref_atomic(vcpu, gfn, &page); + + return kvm_try_get_refcounted_page_ref(pfn, page); } EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_pfn_atomic); kvm_pfn_t gfn_to_pfn(struct kvm *kvm, gfn_t gfn) { - return gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn); + struct page *page; + kvm_pfn_t pfn; + + pfn = gfn_to_pfn_noref(kvm, gfn, &page); + + return kvm_try_get_refcounted_page_ref(pfn, page); } EXPORT_SYMBOL_GPL(gfn_to_pfn); kvm_pfn_t kvm_vcpu_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn) { - return gfn_to_pfn_memslot(kvm_vcpu_gfn_to_memslot(vcpu, gfn), gfn); + struct page *page; + kvm_pfn_t pfn; + + pfn = kvm_vcpu_gfn_to_pfn_noref(vcpu, gfn, &page); + + return kvm_try_get_refcounted_page_ref(pfn, page); } EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_pfn); @@ -2925,6 +3013,17 @@ void kvm_release_pfn_clean(kvm_pfn_t pfn) } EXPORT_SYMBOL_GPL(kvm_release_pfn_clean); +void kvm_release_pfn_noref_clean(kvm_pfn_t pfn, struct page *page) +{ + if (is_error_noslot_pfn(pfn)) + return; + + kvm_set_pfn_accessed(pfn); + if (page) + put_page(page); +} +EXPORT_SYMBOL_GPL(kvm_release_pfn_noref_clean); + void kvm_release_page_dirty(struct page *page) { WARN_ON(is_error_page(page)); diff --git a/virt/kvm/kvm_mm.h b/virt/kvm/kvm_mm.h index 180f1a09e6ba..a4072cc5a189 100644 --- a/virt/kvm/kvm_mm.h +++ b/virt/kvm/kvm_mm.h @@ -3,6 +3,8 @@ #ifndef __KVM_MM_H__ #define __KVM_MM_H__ 1 +#include + /* * Architectures can choose whether to use an rwlock or spinlock * for the mmu_lock. These macros, for use in common code @@ -21,7 +23,9 @@ #endif /* KVM_HAVE_MMU_RWLOCK */ kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool interruptible, - bool *async, bool write_fault, bool *writable); + bool *async, bool write_fault, bool *writable, + struct page **page); +kvm_pfn_t kvm_try_get_refcounted_page_ref(kvm_pfn_t pfn, struct page *page); #ifdef CONFIG_HAVE_KVM_PFNCACHE void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm, diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c index 2d6aba677830..e25d3af969f4 100644 --- a/virt/kvm/pfncache.c +++ b/virt/kvm/pfncache.c @@ -144,6 +144,7 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc) kvm_pfn_t new_pfn = KVM_PFN_ERR_FAULT; void *new_khva = NULL; unsigned long mmu_seq; + struct page *page; lockdep_assert_held(&gpc->refresh_lock); @@ -183,10 +184,19 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc) } /* We always request a writeable mapping */ - new_pfn = hva_to_pfn(gpc->uhva, false, false, NULL, true, NULL); + new_pfn = hva_to_pfn(gpc->uhva, false, false, NULL, true, NULL, &page); if (is_error_noslot_pfn(new_pfn)) goto out_error; + /* + * Filter out pages that support refcounting but which aren't + * currently being refcounted. Some KVM MMUs support such pages, but + * although we could support them here, kvm internals more generally + * don't. Reject them here for consistency. + */ + if (kvm_try_get_refcounted_page_ref(new_pfn, page) != new_pfn) + goto out_error; + /* * Obtain a new kernel mapping if KVM itself will access the * pfn. 
Note, kmap() and memremap() can both sleep, so this
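Before moving on to the users of the new API, its contract can be illustrated with a minimal, hypothetical caller. The function name kvm_map_demo_gfn() below is invented for illustration; gfn_to_pfn_noref(), is_error_noslot_pfn() and kvm_release_pfn_noref_clean() are the helpers introduced or used by this patch, and the caller is assumed to be protected by the MMU notifier so that no long-term reference to the pfn is required.

#include <linux/kvm_host.h>

/* Hypothetical caller of the noref API; illustration only, not part of the patch. */
static int kvm_map_demo_gfn(struct kvm *kvm, gfn_t gfn)
{
	struct page *page;
	kvm_pfn_t pfn;

	/* @page is only set if the hva was resolved via gup. */
	pfn = gfn_to_pfn_noref(kvm, gfn, &page);
	if (is_error_noslot_pfn(pfn))
		return -EFAULT;

	/*
	 * ... install the pfn into a mapping whose lifetime is guarded by
	 * the mmu_notifier, so no long-term reference is kept ...
	 */

	/* Mark the pfn accessed and drop the gup reference, if one was taken. */
	kvm_release_pfn_noref_clean(pfn, page);
	return 0;
}

The key point is that @page may legitimately be NULL: in that case the pfn was not obtained via gup and there is no reference to drop.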
From patchwork Thu Mar 30 08:58:00 2023
X-Patchwork-Submitter: David Stevens
X-Patchwork-Id: 76981
From: David Stevens
To: Marc Zyngier, Sean Christopherson
Cc: Oliver Upton, Paolo Bonzini, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org,
    kvm@vger.kernel.org, David Stevens
Subject: [PATCH v6 2/4] KVM: x86/mmu: use gfn_to_pfn_noref
Date: Thu, 30 Mar 2023 17:58:00 +0900
Message-Id: <20230330085802.2414466-3-stevensd@google.com>
In-Reply-To:
<20230330085802.2414466-1-stevensd@google.com> References: <20230330085802.2414466-1-stevensd@google.com> MIME-Version: 1.0 X-Spam-Status: No, score=-0.2 required=5.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, SPF_HELO_NONE,SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1761783526343809463?= X-GMAIL-MSGID: =?utf-8?q?1761783526343809463?= From: David Stevens Switch the x86 mmu to the new gfn_to_pfn_noref functions. This allows IO and PFNMAP mappings backed with valid struct pages but without refcounting (e.g. tail pages of non-compound higher order allocations) to be mapped into the guest. Signed-off-by: David Stevens --- arch/x86/kvm/mmu/mmu.c | 19 ++++++++++--------- arch/x86/kvm/mmu/mmu_internal.h | 1 + arch/x86/kvm/mmu/paging_tmpl.h | 7 ++++--- arch/x86/kvm/x86.c | 5 +++-- 4 files changed, 18 insertions(+), 14 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 144c5a01cd77..86b74e7bccfa 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -3114,7 +3114,7 @@ void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault if (unlikely(fault->max_level == PG_LEVEL_4K)) return; - if (is_error_noslot_pfn(fault->pfn)) + if (!fault->page) return; if (kvm_slot_dirty_track_enabled(slot)) @@ -4224,6 +4224,7 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault if (is_guest_mode(vcpu)) { fault->slot = NULL; fault->pfn = KVM_PFN_NOSLOT; + fault->page = NULL; fault->map_writable = false; return RET_PF_CONTINUE; } @@ -4239,9 +4240,9 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault } async = false; - fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, false, &async, - fault->write, &fault->map_writable, - &fault->hva); + fault->pfn = __gfn_to_pfn_noref_memslot(slot, fault->gfn, false, false, &async, + fault->write, &fault->map_writable, + &fault->hva, &fault->page); if (!async) return RET_PF_CONTINUE; /* *pfn has correct page already */ @@ -4261,9 +4262,9 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault * to wait for IO. Note, gup always bails if it is unable to quickly * get a page and a fatal signal, i.e. SIGKILL, is pending. 
*/ - fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, true, NULL, - fault->write, &fault->map_writable, - &fault->hva); + fault->pfn = __gfn_to_pfn_noref_memslot(slot, fault->gfn, false, true, NULL, + fault->write, &fault->map_writable, + &fault->hva, &fault->page); return RET_PF_CONTINUE; } @@ -4349,7 +4350,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault out_unlock: write_unlock(&vcpu->kvm->mmu_lock); - kvm_release_pfn_clean(fault->pfn); + kvm_release_pfn_noref_clean(fault->pfn, fault->page); return r; } @@ -4427,7 +4428,7 @@ static int kvm_tdp_mmu_page_fault(struct kvm_vcpu *vcpu, out_unlock: read_unlock(&vcpu->kvm->mmu_lock); - kvm_release_pfn_clean(fault->pfn); + kvm_release_pfn_noref_clean(fault->pfn, fault->page); return r; } #endif diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index 2cbb155c686c..6ee34a2d0e13 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -239,6 +239,7 @@ struct kvm_page_fault { unsigned long mmu_seq; kvm_pfn_t pfn; hva_t hva; + struct page *page; bool map_writable; /* diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h index a056f2773dd9..e4e54e372721 100644 --- a/arch/x86/kvm/mmu/paging_tmpl.h +++ b/arch/x86/kvm/mmu/paging_tmpl.h @@ -525,6 +525,7 @@ FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, unsigned pte_access; gfn_t gfn; kvm_pfn_t pfn; + struct page *page; if (FNAME(prefetch_invalid_gpte)(vcpu, sp, spte, gpte)) return false; @@ -540,12 +541,12 @@ FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, if (!slot) return false; - pfn = gfn_to_pfn_memslot_atomic(slot, gfn); + pfn = gfn_to_pfn_noref_memslot_atomic(slot, gfn, &page); if (is_error_pfn(pfn)) return false; mmu_set_spte(vcpu, slot, spte, pte_access, gfn, pfn, NULL); - kvm_release_pfn_clean(pfn); + kvm_release_pfn_noref_clean(pfn, page); return true; } @@ -830,7 +831,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault out_unlock: write_unlock(&vcpu->kvm->mmu_lock); - kvm_release_pfn_clean(fault->pfn); + kvm_release_pfn_noref_clean(fault->pfn, fault->page); return r; } diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 237c483b1230..53a8c9e776e5 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -8458,6 +8458,7 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, { gpa_t gpa = cr2_or_gpa; kvm_pfn_t pfn; + struct page *page; if (!(emulation_type & EMULTYPE_ALLOW_RETRY_PF)) return false; @@ -8487,7 +8488,7 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, * retry instruction -> write #PF -> emulation fail -> retry * instruction -> ... */ - pfn = gfn_to_pfn(vcpu->kvm, gpa_to_gfn(gpa)); + pfn = gfn_to_pfn_noref(vcpu->kvm, gpa_to_gfn(gpa), &page); /* * If the instruction failed on the error pfn, it can not be fixed, @@ -8496,7 +8497,7 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, if (is_error_noslot_pfn(pfn)) return false; - kvm_release_pfn_clean(pfn); + kvm_release_pfn_noref_clean(pfn, page); /* The instructions are well-emulated on direct mmu. 
*/ if (vcpu->arch.mmu->root_role.direct) {
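Condensed, the converted x86 fault path plumbs a struct page through kvm_page_fault and releases it together with the pfn. The sketch below is heavily simplified and the name demo_faultin_and_release() is invented; it collapses __kvm_faultin_pfn() and the out_unlock path of direct_page_fault(), omits the async/retry and no-slot handling, and assumes the definitions from arch/x86/kvm/mmu/mmu_internal.h.

static int demo_faultin_and_release(struct kvm_vcpu *vcpu,
				    struct kvm_page_fault *fault)
{
	bool async = false;

	/* fault->page records whether the pfn was obtained via gup. */
	fault->pfn = __gfn_to_pfn_noref_memslot(fault->slot, fault->gfn,
						false, false, &async,
						fault->write,
						&fault->map_writable,
						&fault->hva, &fault->page);
	if (is_error_noslot_pfn(fault->pfn))
		return -EFAULT;

	/* ... map fault->pfn under mmu_lock, protected by the mmu_notifier ... */

	/* Pairs with the resolution above; put the page only if gup took a reference. */
	kvm_release_pfn_noref_clean(fault->pfn, fault->page);
	return 0;
}

Because fault->page is NULL for pfns that did not come from gup, kvm_mmu_hugepage_adjust() now bails on !fault->page instead of is_error_noslot_pfn(), so non-refcounted pfns are simply never considered for hugepage promotion.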
From patchwork Thu Mar 30 08:58:01 2023
X-Patchwork-Submitter: David Stevens
X-Patchwork-Id: 76980
From: David Stevens
To: Marc Zyngier, Sean Christopherson
Cc: Oliver Upton, Paolo Bonzini, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org,
    kvm@vger.kernel.org, David Stevens
Subject: [PATCH v6 3/4] KVM: arm64/mmu: use gfn_to_pfn_noref
Date: Thu, 30 Mar 2023 17:58:01 +0900
Message-Id: <20230330085802.2414466-4-stevensd@google.com>
In-Reply-To:
<20230330085802.2414466-1-stevensd@google.com> References: <20230330085802.2414466-1-stevensd@google.com> MIME-Version: 1.0 X-Spam-Status: No, score=-0.2 required=5.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, SPF_HELO_NONE,SPF_PASS autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1761783512659813722?= X-GMAIL-MSGID: =?utf-8?q?1761783512659813722?= From: David Stevens Switch the arm64 mmu to the new gfn_to_pfn_noref functions. This allows IO and PFNMAP mappings backed with valid struct pages but without refcounting (e.g. tail pages of non-compound higher order allocations) to be mapped into the guest. Signed-off-by: David Stevens --- arch/arm64/kvm/mmu.c | 21 ++++++++++++--------- 1 file changed, 12 insertions(+), 9 deletions(-) diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index 7113587222ff..0fd726e82a19 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -1082,7 +1082,7 @@ static bool fault_supports_stage2_huge_mapping(struct kvm_memory_slot *memslot, static unsigned long transparent_hugepage_adjust(struct kvm *kvm, struct kvm_memory_slot *memslot, unsigned long hva, kvm_pfn_t *pfnp, - phys_addr_t *ipap) + struct page **page, phys_addr_t *ipap) { kvm_pfn_t pfn = *pfnp; @@ -1091,7 +1091,8 @@ transparent_hugepage_adjust(struct kvm *kvm, struct kvm_memory_slot *memslot, * sure that the HVA and IPA are sufficiently aligned and that the * block map is contained within the memslot. */ - if (fault_supports_stage2_huge_mapping(memslot, hva, PMD_SIZE) && + if (*page && + fault_supports_stage2_huge_mapping(memslot, hva, PMD_SIZE) && get_user_mapping_size(kvm, hva) >= PMD_SIZE) { /* * The address we faulted on is backed by a transparent huge @@ -1112,10 +1113,11 @@ transparent_hugepage_adjust(struct kvm *kvm, struct kvm_memory_slot *memslot, * page accordingly. */ *ipap &= PMD_MASK; - kvm_release_pfn_clean(pfn); + kvm_release_page_clean(*page); pfn &= ~(PTRS_PER_PMD - 1); - get_page(pfn_to_page(pfn)); *pfnp = pfn; + *page = pfn_to_page(pfn); + get_page(*page); return PMD_SIZE; } @@ -1201,6 +1203,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa, short vma_shift; gfn_t gfn; kvm_pfn_t pfn; + struct page *page; bool logging_active = memslot_is_logging(memslot); unsigned long fault_level = kvm_vcpu_trap_get_fault_level(vcpu); unsigned long vma_pagesize, fault_granule; @@ -1301,8 +1304,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa, */ smp_rmb(); - pfn = __gfn_to_pfn_memslot(memslot, gfn, false, false, NULL, - write_fault, &writable, NULL); + pfn = __gfn_to_pfn_noref_memslot(memslot, gfn, false, false, NULL, + write_fault, &writable, NULL, &page); if (pfn == KVM_PFN_ERR_HWPOISON) { kvm_send_hwpoison_signal(hva, vma_shift); return 0; @@ -1348,7 +1351,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa, vma_pagesize = fault_granule; else vma_pagesize = transparent_hugepage_adjust(kvm, memslot, - hva, &pfn, + hva, + &pfn, &page, &fault_ipa); } @@ -1395,8 +1399,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa, out_unlock: read_unlock(&kvm->mmu_lock); - kvm_set_pfn_accessed(pfn); - kvm_release_pfn_clean(pfn); + kvm_release_pfn_noref_clean(pfn, page); return ret != -EAGAIN ? 
ret : 0; }
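The least obvious part of the arm64 conversion is the reference handling in transparent_hugepage_adjust(). Restated as a standalone sketch (demo_try_pmd_promote() is an invented name; the body mirrors the patched helper and assumes the static helpers in arch/arm64/kvm/mmu.c), promotion to a PMD-sized block is only attempted for gup-backed pages, and the gup reference is moved from the faulting page to the head page of the transparent hugepage:

static unsigned long demo_try_pmd_promote(struct kvm *kvm,
					  struct kvm_memory_slot *memslot,
					  unsigned long hva, kvm_pfn_t *pfnp,
					  struct page **page, phys_addr_t *ipap)
{
	kvm_pfn_t pfn = *pfnp;

	/* Only gup-backed (refcounted) pages are eligible for promotion. */
	if (*page &&
	    fault_supports_stage2_huge_mapping(memslot, hva, PMD_SIZE) &&
	    get_user_mapping_size(kvm, hva) >= PMD_SIZE) {
		*ipap &= PMD_MASK;
		/* Drop the reference on the faulting page... */
		kvm_release_page_clean(*page);
		pfn &= ~(PTRS_PER_PMD - 1);
		*pfnp = pfn;
		/* ...and take one on the head page of the hugepage instead. */
		*page = pfn_to_page(pfn);
		get_page(*page);
		return PMD_SIZE;
	}

	/* Fall back to mapping a single page. */
	return PAGE_SIZE;
}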
From patchwork Thu Mar 30 08:58:02 2023
X-Patchwork-Submitter: David Stevens
X-Patchwork-Id: 76979
From: David Stevens
To: Marc Zyngier, Sean Christopherson
Cc: Oliver Upton, Paolo Bonzini, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.cs.columbia.edu, linux-kernel@vger.kernel.org,
    kvm@vger.kernel.org, David Stevens
Subject: [PATCH v6 4/4] KVM: mmu: remove over-aggressive warnings
Date: Thu, 30 Mar 2023 17:58:02 +0900
Message-Id: <20230330085802.2414466-5-stevensd@google.com>
In-Reply-To: <20230330085802.2414466-1-stevensd@google.com>
References: <20230330085802.2414466-1-stevensd@google.com>
From: David Stevens

Remove two warnings that require ref counts for pages to be non-zero, as
mapped pfns from follow_pfn may not have an initialized ref count.

Signed-off-by: David Stevens
---
 arch/x86/kvm/mmu/mmu.c | 10 ----------
 virt/kvm/kvm_main.c    |  5 ++---
 2 files changed, 2 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 86b74e7bccfa..46b3d6c0ff27 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -555,7 +555,6 @@ static u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep)
 	kvm_pfn_t pfn;
 	u64 old_spte = *sptep;
 	int level = sptep_to_sp(sptep)->role.level;
-	struct page *page;
 
 	if (!is_shadow_present_pte(old_spte) ||
 	    !spte_has_volatile_bits(old_spte))
@@ -570,15 +569,6 @@ static u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep)
 
 	pfn = spte_to_pfn(old_spte);
 
-	/*
-	 * KVM doesn't hold a reference to any pages mapped into the guest, and
-	 * instead uses the mmu_notifier to ensure that KVM unmaps any pages
-	 * before they are reclaimed. Sanity check that, if the pfn is backed
-	 * by a refcounted page, the refcount is elevated.
-	 */
-	page = kvm_pfn_to_refcounted_page(pfn);
-	WARN_ON(page && !page_count(page));
-
 	if (is_accessed_spte(old_spte))
 		kvm_set_pfn_accessed(pfn);
 
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 007dd984eeea..a80070cb04d7 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -165,10 +165,9 @@ bool kvm_is_zone_device_page(struct page *page)
 	/*
 	 * The metadata used by is_zone_device_page() to determine whether or
 	 * not a page is ZONE_DEVICE is guaranteed to be valid if and only if
-	 * the device has been pinned, e.g. by get_user_pages(). WARN if the
-	 * page_count() is zero to help detect bad usage of this helper.
+	 * the device has been pinned, e.g. by get_user_pages().
 	 */
-	if (WARN_ON_ONCE(!page_count(page)))
+	if (!page_count(page))
 		return false;
 
 	return is_zone_device_page(page);