From patchwork Sat Dec 2 09:28:50 2023
X-Patchwork-Submitter: Yan Zhao <yan.y.zhao@intel.com>
X-Patchwork-Id: 172785
From: Yan Zhao <yan.y.zhao@intel.com>
To: iommu@lists.linux.dev, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: alex.williamson@redhat.com, jgg@nvidia.com, pbonzini@redhat.com,
 seanjc@google.com, joro@8bytes.org, will@kernel.org, robin.murphy@arm.com,
 kevin.tian@intel.com, baolu.lu@linux.intel.com, dwmw2@infradead.org,
 yi.l.liu@intel.com, Yan Zhao <yan.y.zhao@intel.com>
Subject: [RFC PATCH 27/42] KVM: x86/mmu: change param "vcpu" to "kvm" in
 kvm_mmu_hugepage_adjust()
Date: Sat, 2 Dec 2023 17:28:50 +0800
Message-Id: <20231202092850.15107-1-yan.y.zhao@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20231202091211.13376-1-yan.y.zhao@intel.com>
References: <20231202091211.13376-1-yan.y.zhao@intel.com>

kvm_mmu_hugepage_adjust() requires "vcpu" only to get "vcpu->kvm".
Switch to pass in "kvm" directly.

No functional changes expected.

Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>
---
 arch/x86/kvm/mmu/mmu.c          | 8 ++++----
 arch/x86/kvm/mmu/mmu_internal.h | 2 +-
 arch/x86/kvm/mmu/paging_tmpl.h  | 2 +-
 arch/x86/kvm/mmu/tdp_mmu.c      | 2 +-
 4 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index cfeb066f38687..b461bab51255e 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3159,7 +3159,7 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm,
 	return min(host_level, max_level);
 }
 
-void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
+void kvm_mmu_hugepage_adjust(struct kvm *kvm, struct kvm_page_fault *fault)
 {
 	struct kvm_memory_slot *slot = fault->slot;
 	kvm_pfn_t mask;
@@ -3179,8 +3179,8 @@ void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	/*
 	 * Enforce the iTLB multihit workaround after capturing the requested
 	 * level, which will be used to do precise, accurate accounting.
 	 */
-	fault->req_level = kvm_mmu_max_mapping_level(vcpu->kvm, slot,
-						     fault->gfn, fault->max_level);
+	fault->req_level = kvm_mmu_max_mapping_level(kvm, slot, fault->gfn,
+						     fault->max_level);
 	if (fault->req_level == PG_LEVEL_4K || fault->huge_page_disallowed)
 		return;
@@ -3222,7 +3222,7 @@ static int direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	int ret;
 	gfn_t base_gfn = fault->gfn;
 
-	kvm_mmu_hugepage_adjust(vcpu, fault);
+	kvm_mmu_hugepage_adjust(vcpu->kvm, fault);
 
 	trace_kvm_mmu_spte_requested(fault);
 	for_each_shadow_entry(vcpu, fault->addr, it) {
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 7699596308386..1e9be0604e348 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -339,7 +339,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 int kvm_mmu_max_mapping_level(struct kvm *kvm,
 			      const struct kvm_memory_slot *slot, gfn_t gfn,
 			      int max_level);
-void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
+void kvm_mmu_hugepage_adjust(struct kvm *kvm, struct kvm_page_fault *fault);
 void disallowed_hugepage_adjust(struct kvm_page_fault *fault, u64 spte, int cur_level);
 
 void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 84509af0d7f9d..13c6390824a3e 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -716,7 +716,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
 	 * are being shadowed by KVM, i.e. allocating a new shadow page may
 	 * affect the allowed hugepage size.
 	 */
-	kvm_mmu_hugepage_adjust(vcpu, fault);
+	kvm_mmu_hugepage_adjust(vcpu->kvm, fault);
 
 	trace_kvm_mmu_spte_requested(fault);
 
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 6657685a28709..5d76d4849e8aa 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1047,7 +1047,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	struct kvm_mmu_page *sp;
 	int ret = RET_PF_RETRY;
 
-	kvm_mmu_hugepage_adjust(vcpu, fault);
+	kvm_mmu_hugepage_adjust(vcpu->kvm, fault);
 
 	trace_kvm_mmu_spte_requested(fault);
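
A hedged usage sketch, not part of the patch: with the parameter narrowed
to "struct kvm *", the helper can be invoked by code that holds only a VM
pointer and no vCPU, presumably what later patches in this series rely on.
The function below is hypothetical, invented purely for illustration:

	/* Hypothetical caller with a VM pointer but no vCPU context. */
	static void example_hugepage_adjust(struct kvm *kvm,
					    struct kvm_page_fault *fault)
	{
		/*
		 * Before this patch, a struct kvm_vcpu had to be passed in
		 * solely so the helper could dereference vcpu->kvm.
		 */
		kvm_mmu_hugepage_adjust(kvm, fault);
	}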