From patchwork Thu Feb 15 15:29:02 2024
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 201563
From: Paul Durrant
To: Paolo Bonzini, Jonathan Corbet, Christian Borntraeger, Janosch Frank,
	Claudio Imbrenda, David Hildenbrand, Heiko Carstens, Vasily Gorbik,
	Alexander Gordeev, Sven Schnelle, Sean Christopherson,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	x86@kernel.org, "H. Peter Anvin", David Woodhouse, Paul Durrant,
	Shuah Khan, kvm@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-s390@vger.kernel.org,
	linux-kselftest@vger.kernel.org
Subject: [PATCH v13 07/21] KVM: pfncache: include page offset in uhva and use it consistently
Date: Thu, 15 Feb 2024 15:29:02 +0000
Message-Id: <20240215152916.1158-8-paul@xen.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240215152916.1158-1-paul@xen.org>
References: <20240215152916.1158-1-paul@xen.org>
MIME-Version: 1.0

From: Paul Durrant

Currently the pfncache page offset is sometimes determined using the gpa
and sometimes the khva, whilst the uhva is always page-aligned. After a
subsequent patch is applied the gpa will not always be valid, so adjust
the code to include the page offset in the uhva and use it consistently
as the source of truth.

Also, where a page-aligned address is required, use PAGE_ALIGN_DOWN()
for clarity.

No functional change intended.

Signed-off-by: Paul Durrant
Reviewed-by: David Woodhouse
---
Cc: Sean Christopherson
Cc: Paolo Bonzini
Cc: David Woodhouse

v8:
 - New in this version.
---
 virt/kvm/pfncache.c | 29 +++++++++++++++++++++--------
 1 file changed, 21 insertions(+), 8 deletions(-)

diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index 0eeb034d0674..97eec8ee3449 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -48,10 +48,10 @@ bool kvm_gpc_check(struct gfn_to_pfn_cache *gpc, unsigned long len)
 	if (!gpc->active)
 		return false;
 
-	if (offset_in_page(gpc->gpa) + len > PAGE_SIZE)
+	if (gpc->generation != slots->generation || kvm_is_error_hva(gpc->uhva))
 		return false;
 
-	if (gpc->generation != slots->generation || kvm_is_error_hva(gpc->uhva))
+	if (offset_in_page(gpc->uhva) + len > PAGE_SIZE)
 		return false;
 
 	if (!gpc->valid)
@@ -119,7 +119,7 @@ static inline bool mmu_notifier_retry_cache(struct kvm *kvm, unsigned long mmu_s
 static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc)
 {
 	/* Note, the new page offset may be different than the old! */
-	void *old_khva = gpc->khva - offset_in_page(gpc->khva);
+	void *old_khva = (void *)PAGE_ALIGN_DOWN((uintptr_t)gpc->khva);
 	kvm_pfn_t new_pfn = KVM_PFN_ERR_FAULT;
 	void *new_khva = NULL;
 	unsigned long mmu_seq;
@@ -192,7 +192,7 @@ static kvm_pfn_t hva_to_pfn_retry(struct gfn_to_pfn_cache *gpc)
 
 	gpc->valid = true;
 	gpc->pfn = new_pfn;
-	gpc->khva = new_khva + offset_in_page(gpc->gpa);
+	gpc->khva = new_khva + offset_in_page(gpc->uhva);
 
 	/*
 	 * Put the reference to the _new_ pfn.  The pfn is now tracked by the
@@ -217,6 +217,7 @@ static int __kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa,
 	bool unmap_old = false;
 	unsigned long old_uhva;
 	kvm_pfn_t old_pfn;
+	bool hva_change = false;
 	void *old_khva;
 	int ret;
 
@@ -242,10 +243,10 @@ static int __kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa,
 	}
 
 	old_pfn = gpc->pfn;
-	old_khva = gpc->khva - offset_in_page(gpc->khva);
-	old_uhva = gpc->uhva;
+	old_khva = (void *)PAGE_ALIGN_DOWN((uintptr_t)gpc->khva);
+	old_uhva = PAGE_ALIGN_DOWN(gpc->uhva);
 
-	/* If the userspace HVA is invalid, refresh that first */
+	/* Refresh the userspace HVA if necessary */
 	if (gpc->gpa != gpa || gpc->generation != slots->generation ||
 	    kvm_is_error_hva(gpc->uhva)) {
 		gfn_t gfn = gpa_to_gfn(gpa);
@@ -259,13 +260,25 @@ static int __kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa,
 			ret = -EFAULT;
 			goto out;
 		}
+
+		/*
+		 * Even if the GPA and/or the memslot generation changed, the
+		 * HVA may still be the same.
+		 */
+		if (gpc->uhva != old_uhva)
+			hva_change = true;
+	} else {
+		gpc->uhva = old_uhva;
 	}
 
+	/* Note: the offset must be correct before calling hva_to_pfn_retry() */
+	gpc->uhva += page_offset;
+
 	/*
 	 * If the userspace HVA changed or the PFN was already invalid,
 	 * drop the lock and do the HVA to PFN lookup again.
 	 */
-	if (!gpc->valid || old_uhva != gpc->uhva) {
+	if (!gpc->valid || hva_change) {
 		ret = hva_to_pfn_retry(gpc);
 	} else {
 		/*
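
A minimal standalone sketch (userspace C, not part of the patch) of the
invariant the change relies on: once the page offset is folded into the
uhva, both the page base and the offset can be recovered from the uhva
alone, with no reference to the gpa. LOCAL_PAGE_SIZE and the two helpers
below are assumed stand-ins for the kernel's PAGE_SIZE, PAGE_ALIGN_DOWN()
and offset_in_page(), and the example addresses are invented:

#include <stdio.h>

#define LOCAL_PAGE_SIZE 4096UL

/* Stand-in for PAGE_ALIGN_DOWN(): round an address down to its page base. */
static unsigned long page_align_down(unsigned long addr)
{
	return addr & ~(LOCAL_PAGE_SIZE - 1);
}

/* Stand-in for offset_in_page(): the byte offset within the page. */
static unsigned long page_offset_of(unsigned long addr)
{
	return addr & (LOCAL_PAGE_SIZE - 1);
}

int main(void)
{
	unsigned long gpa = 0x12345678UL;             /* guest physical address */
	unsigned long offset = page_offset_of(gpa);   /* 0x678 */
	unsigned long uhva = 0x7f00aa1000UL + offset; /* as in gpc->uhva += page_offset */
	unsigned long len = 8;

	/* Both pieces come back out of the uhva; the gpa is no longer needed. */
	printf("page base 0x%lx, offset 0x%lx\n",
	       page_align_down(uhva), page_offset_of(uhva));

	/*
	 * The kvm_gpc_check()-style bound: [uhva, uhva + len) must not
	 * cross a page boundary.
	 */
	if (page_offset_of(uhva) + len > LOCAL_PAGE_SIZE)
		printf("request crosses a page boundary\n");

	return 0;
}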