From patchwork Tue Dec 19 16:10:57 2023
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 181073
From: Paul Durrant
To: Paolo Bonzini, Jonathan Corbet, Sean Christopherson, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
Peter Anvin" , David Woodhouse , Paul Durrant , Shuah Khan , kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org Subject: [PATCH v11 07/19] KVM: pfncache: include page offset in uhva and use it consistently Date: Tue, 19 Dec 2023 16:10:57 +0000 Message-Id: <20231219161109.1318-8-paul@xen.org> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20231219161109.1318-1-paul@xen.org> References: <20231219161109.1318-1-paul@xen.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1785727658431804062 X-GMAIL-MSGID: 1785727658431804062 From: Paul Durrant Currently the pfncache page offset is sometimes determined using the gpa and sometimes the khva, whilst the uhva is always page-aligned. After a subsequent patch is applied the gpa will not always be valid so adjust the code to include the page offset in the uhva and use it consistently as the source of truth. Also, where a page-aligned address is required, use PAGE_ALIGN_DOWN() for clarity. No functional change intended. Signed-off-by: Paul Durrant Reviewed-by: David Woodhouse --- Cc: Sean Christopherson Cc: Paolo Bonzini Cc: David Woodhouse v8: - New in this version. --- virt/kvm/pfncache.c | 29 +++++++++++++++++++++-------- 1 file changed, 21 insertions(+), 8 deletions(-) diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c index 0eeb034d0674..97eec8ee3449 100644 --- a/virt/kvm/pfncache.c +++ b/virt/kvm/pfncache.c @@ -48,10 +48,10 @@ bool kvm_gpc_check(struct gfn_to_pfn_cache *gpc, unsigned long len) if (!gpc->active) return false; - if (offset_in_page(gpc->gpa) + len > PAGE_SIZE) + if (gpc->generation != slots->generation || kvm_is_error_hva(gpc->uhva)) return false; - if (gpc->generation != slots->generation || kvm_is_error_hva(gpc->uhva)) + if (offset_in_page(gpc->uhva) + len > PAGE_SIZE) return false; if (!gpc->valid) @@ -119,7 +119,7 @@ static inline bool mmu_notifier_retry_cache(struct kvm *kvm, unsigned long mmu_s static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc) { /* Note, the new page offset may be different than the old! */ - void *old_khva = gpc->khva - offset_in_page(gpc->khva); + void *old_khva = (void *)PAGE_ALIGN_DOWN((uintptr_t)gpc->khva); kvm_pfn_t new_pfn = KVM_PFN_ERR_FAULT; void *new_khva = NULL; unsigned long mmu_seq; @@ -192,7 +192,7 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc) gpc->valid = true; gpc->pfn = new_pfn; - gpc->khva = new_khva + offset_in_page(gpc->gpa); + gpc->khva = new_khva + offset_in_page(gpc->uhva); /* * Put the reference to the _new_ pfn. 
The pfn is now tracked by the @@ -217,6 +217,7 @@ static int __kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa, bool unmap_old = false; unsigned long old_uhva; kvm_pfn_t old_pfn; + bool hva_change = false; void *old_khva; int ret; @@ -242,10 +243,10 @@ static int __kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa, } old_pfn = gpc->pfn; - old_khva = gpc->khva - offset_in_page(gpc->khva); - old_uhva = gpc->uhva; + old_khva = (void *)PAGE_ALIGN_DOWN((uintptr_t)gpc->khva); + old_uhva = PAGE_ALIGN_DOWN(gpc->uhva); - /* If the userspace HVA is invalid, refresh that first */ + /* Refresh the userspace HVA if necessary */ if (gpc->gpa != gpa || gpc->generation != slots->generation || kvm_is_error_hva(gpc->uhva)) { gfn_t gfn = gpa_to_gfn(gpa); @@ -259,13 +260,25 @@ static int __kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa, ret = -EFAULT; goto out; } + + /* + * Even if the GPA and/or the memslot generation changed, the + * HVA may still be the same. + */ + if (gpc->uhva != old_uhva) + hva_change = true; + } else { + gpc->uhva = old_uhva; } + /* Note: the offset must be correct before calling hva_to_pfn_retry() */ + gpc->uhva += page_offset; + /* * If the userspace HVA changed or the PFN was already invalid, * drop the lock and do the HVA to PFN lookup again. */ - if (!gpc->valid || old_uhva != gpc->uhva) { + if (!gpc->valid || hva_change) { ret = hva_to_pfn_retry(gpc); } else { /*
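
To make the offset arithmetic above easier to follow, here is a minimal
userspace sketch of the invariant the patch establishes: once the uhva
carries the page offset, both the offset (via offset_in_page()) and the
page base (via PAGE_ALIGN_DOWN()) are recoverable from the uhva alone,
so neither the gpa nor the khva is needed as a source of truth. The
macro definitions, the 4KiB page size and the example address below are
illustrative stand-ins mimicking the kernel's, not kernel code.

#include <stdio.h>

/* Userspace stand-ins for the kernel macros used by the patch. */
#define PAGE_SIZE              4096UL
#define offset_in_page(addr)   ((unsigned long)(addr) & (PAGE_SIZE - 1))
#define PAGE_ALIGN_DOWN(addr)  ((unsigned long)(addr) & ~(PAGE_SIZE - 1))

int main(void)
{
	/* Hypothetical uhva that, per this patch, now carries the page offset. */
	unsigned long uhva = 0x7f12345008c0UL;
	unsigned long len = 8;

	/* Both pieces of information come straight back out of the uhva... */
	unsigned long offset = offset_in_page(uhva);
	unsigned long base = PAGE_ALIGN_DOWN(uhva);

	/* ...so a bounds check in the style of kvm_gpc_check() needs no gpa. */
	int fits = (offset + len <= PAGE_SIZE);

	printf("uhva=%#lx base=%#lx offset=%#lx fits=%d\n",
	       uhva, base, offset, fits);
	return 0;
}

This also shows why hva_to_pfn_retry() can compute the new khva as
new_khva + offset_in_page(gpc->uhva): the page offset survives the
page-alignment round trip, whereas before the patch it had to be
re-derived from the gpa.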