From patchwork Tue Dec 19 16:10:58 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 181074
From: Paul Durrant
To: Paolo Bonzini, Jonathan Corbet, Sean Christopherson, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
Peter Anvin" , David Woodhouse , Paul Durrant , Shuah Khan , kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org Subject: [PATCH v11 08/19] KVM: pfncache: allow a cache to be activated with a fixed (userspace) HVA Date: Tue, 19 Dec 2023 16:10:58 +0000 Message-Id: <20231219161109.1318-9-paul@xen.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20231219161109.1318-1-paul@xen.org> References: <20231219161109.1318-1-paul@xen.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1785727681073985336 X-GMAIL-MSGID: 1785727681073985336 From: Paul Durrant Some pfncache pages may actually be overlays on guest memory that have a fixed HVA within the VMM. It's pointless to invalidate such cached mappings if the overlay is moved so allow a cache to be activated directly with the HVA to cater for such cases. A subsequent patch will make use of this facility. Signed-off-by: Paul Durrant Reviewed-by: David Woodhouse --- Cc: Sean Christopherson Cc: Paolo Bonzini Cc: David Woodhouse v11: - Fixed kvm_gpc_check() to ignore memslot generation if the cache is not activated with a GPA. (This breakage occured during the re-work for v8). v9: - Pass both GPA and HVA into __kvm_gpc_refresh() rather than overloading the address paraneter and using a bool flag to indicated what it is. v8: - Re-worked to avoid messing with struct gfn_to_pfn_cache. --- include/linux/kvm_host.h | 20 +++++++++++++++++++- virt/kvm/pfncache.c | 40 +++++++++++++++++++++++++++++++--------- 2 files changed, 50 insertions(+), 10 deletions(-) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 6097f076a7b0..8120674b87b0 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -1345,6 +1345,22 @@ void kvm_gpc_init(struct gfn_to_pfn_cache *gpc, struct kvm *kvm); */ int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa, unsigned long len); +/** + * kvm_gpc_activate_hva - prepare a cached kernel mapping and HPA for a given HVA. + * + * @gpc: struct gfn_to_pfn_cache object. + * @hva: userspace virtual address to map. + * @len: sanity check; the range being access must fit a single page. + * + * @return: 0 for success. + * -EINVAL for a mapping which would cross a page boundary. + * -EFAULT for an untranslatable guest physical address. + * + * The semantics of this function are the same as those of kvm_gpc_activate(). It + * merely bypasses a layer of address translation. + */ +int kvm_gpc_activate_hva(struct gfn_to_pfn_cache *gpc, unsigned long hva, unsigned long len); + /** * kvm_gpc_check - check validity of a gfn_to_pfn_cache. 
---
 include/linux/kvm_host.h | 20 +++++++++++++++++++-
 virt/kvm/pfncache.c      | 40 +++++++++++++++++++++++++++++++---------
 2 files changed, 50 insertions(+), 10 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 6097f076a7b0..8120674b87b0 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1345,6 +1345,22 @@ void kvm_gpc_init(struct gfn_to_pfn_cache *gpc, struct kvm *kvm);
  */
 int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa, unsigned long len);
 
+/**
+ * kvm_gpc_activate_hva - prepare a cached kernel mapping and HPA for a given HVA.
+ *
+ * @gpc:    struct gfn_to_pfn_cache object.
+ * @hva:    userspace virtual address to map.
+ * @len:    sanity check; the range being accessed must fit within a single page.
+ *
+ * @return: 0 for success.
+ *          -EINVAL for a mapping which would cross a page boundary.
+ *          -EFAULT for an untranslatable guest physical address.
+ *
+ * The semantics of this function are the same as those of kvm_gpc_activate(). It
+ * merely bypasses a layer of address translation.
+ */
+int kvm_gpc_activate_hva(struct gfn_to_pfn_cache *gpc, unsigned long hva, unsigned long len);
+
 /**
  * kvm_gpc_check - check validity of a gfn_to_pfn_cache.
  *
@@ -1399,7 +1415,9 @@ void kvm_gpc_deactivate(struct gfn_to_pfn_cache *gpc);
 static inline void kvm_gpc_mark_dirty(struct gfn_to_pfn_cache *gpc)
 {
 	lockdep_assert_held(&gpc->lock);
-	mark_page_dirty_in_slot(gpc->kvm, gpc->memslot, gpc->gpa >> PAGE_SHIFT);
+
+	if (gpc->gpa != KVM_XEN_INVALID_GPA)
+		mark_page_dirty_in_slot(gpc->kvm, gpc->memslot, gpc->gpa >> PAGE_SHIFT);
 }
 
 void kvm_sigset_activate(struct kvm_vcpu *vcpu);
diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index 97eec8ee3449..ae822bff812f 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -48,7 +48,10 @@ bool kvm_gpc_check(struct gfn_to_pfn_cache *gpc, unsigned long len)
 	if (!gpc->active)
 		return false;
 
-	if (gpc->generation != slots->generation || kvm_is_error_hva(gpc->uhva))
+	if (gpc->gpa != KVM_XEN_INVALID_GPA && gpc->generation != slots->generation)
+		return false;
+
+	if (kvm_is_error_hva(gpc->uhva))
 		return false;
 
 	if (offset_in_page(gpc->uhva) + len > PAGE_SIZE)
@@ -209,11 +212,13 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc)
 	return -EFAULT;
 }
 
-static int __kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa,
+static int __kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa, unsigned long uhva,
 			     unsigned long len)
 {
 	struct kvm_memslots *slots = kvm_memslots(gpc->kvm);
-	unsigned long page_offset = offset_in_page(gpa);
+	unsigned long page_offset = (gpa != KVM_XEN_INVALID_GPA) ?
+				    offset_in_page(gpa) :
+				    offset_in_page(uhva);
 	bool unmap_old = false;
 	unsigned long old_uhva;
 	kvm_pfn_t old_pfn;
@@ -246,9 +251,15 @@ static int __kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa,
 	old_khva = (void *)PAGE_ALIGN_DOWN((uintptr_t)gpc->khva);
 	old_uhva = PAGE_ALIGN_DOWN(gpc->uhva);
 
-	/* Refresh the userspace HVA if necessary */
-	if (gpc->gpa != gpa || gpc->generation != slots->generation ||
-	    kvm_is_error_hva(gpc->uhva)) {
+	if (gpa == KVM_XEN_INVALID_GPA) {
+		gpc->gpa = KVM_XEN_INVALID_GPA;
+		gpc->uhva = PAGE_ALIGN_DOWN(uhva);
+
+		if (gpc->uhva != old_uhva)
+			hva_change = true;
+	} else if (gpc->gpa != gpa ||
+		   gpc->generation != slots->generation ||
+		   kvm_is_error_hva(gpc->uhva)) {
 		gfn_t gfn = gpa_to_gfn(gpa);
 
 		gpc->gpa = gpa;
@@ -319,7 +330,7 @@ static int __kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa,
 
 int kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, unsigned long len)
 {
-	return __kvm_gpc_refresh(gpc, gpc->gpa, len);
+	return __kvm_gpc_refresh(gpc, gpc->gpa, gpc->uhva, len);
 }
 
 void kvm_gpc_init(struct gfn_to_pfn_cache *gpc, struct kvm *kvm)
@@ -332,7 +343,8 @@ void kvm_gpc_init(struct gfn_to_pfn_cache *gpc, struct kvm *kvm)
 	gpc->uhva = KVM_HVA_ERR_BAD;
 }
 
-int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa, unsigned long len)
+static int __kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa, unsigned long uhva,
+			      unsigned long len)
 {
 	struct kvm *kvm = gpc->kvm;
 
@@ -353,7 +365,17 @@ int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa, unsigned long len)
 		gpc->active = true;
 		write_unlock_irq(&gpc->lock);
 	}
-	return __kvm_gpc_refresh(gpc, gpa, len);
+	return __kvm_gpc_refresh(gpc, gpa, uhva, len);
+}
+
+int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa, unsigned long len)
+{
+	return __kvm_gpc_activate(gpc, gpa, KVM_HVA_ERR_BAD, len);
+}
+
+int kvm_gpc_activate_hva(struct gfn_to_pfn_cache *gpc, unsigned long uhva, unsigned long len)
+{
+	return __kvm_gpc_activate(gpc, KVM_XEN_INVALID_GPA, uhva, len);
+}
 
 void kvm_gpc_deactivate(struct gfn_to_pfn_cache *gpc)
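
As a usage note (again a sketch, not part of this series; 'gpc' is an
already-activated cache and 'value' a placeholder): writes through the
cache follow the usual check-under-lock pattern, and with this patch
kvm_gpc_mark_dirty() becomes a no-op for HVA-backed caches, since there is
no memslot page to mark dirty:

	unsigned long flags;

	read_lock_irqsave(&gpc->lock, flags);
	if (kvm_gpc_check(gpc, sizeof(u64))) {
		/* Write via the cached kernel mapping. */
		*(u64 *)gpc->khva = value;

		/* Dirties the memslot page for a GPA-backed cache; does
		 * nothing when gpc->gpa == KVM_XEN_INVALID_GPA. */
		kvm_gpc_mark_dirty(gpc);
	}
	read_unlock_irqrestore(&gpc->lock, flags);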