Commit Message
Isaku Yamahata
Oct. 30, 2022, 6:22 a.m. UTC
From: Isaku Yamahata <isaku.yamahata@intel.com>

Factor out non-leaf SPTE population logic from kvm_tdp_mmu_map().  The MapGPA
hypercall needs to populate non-leaf SPTEs to record which GPA, private or
shared, is allowed in the leaf EPT entry.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/mmu/tdp_mmu.c | 26 +++++++++++++++++++-------
 1 file changed, 19 insertions(+), 7 deletions(-)
Comments
On Sat, 2022-10-29 at 23:22 -0700, isaku.yamahata@intel.com wrote:
> From: Isaku Yamahata <isaku.yamahata@intel.com>
>
> Factor out non-leaf SPTE population logic from kvm_tdp_mmu_map(). MapGPA
> hypercall needs to populate non-leaf SPTE to record which GPA, private or
> shared, is allowed in the leaf EPT entry.

Is this patch still valid/needed since you have changed to use XArray to store
whether GFN is private or shared?

>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/x86/kvm/mmu/tdp_mmu.c | 26 +++++++++++++++++++-------
>  1 file changed, 19 insertions(+), 7 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index 3325633b1cb5..11b0ec8aebe2 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -1157,6 +1157,24 @@ static int tdp_mmu_link_sp(struct kvm *kvm, struct tdp_iter *iter,
>  	return 0;
>  }
>
> +static int tdp_mmu_populate_nonleaf(struct kvm_vcpu *vcpu, struct tdp_iter *iter,
> +				    bool account_nx)
> +{
> +	struct kvm_mmu_page *sp;
> +	int ret;
> +
> +	KVM_BUG_ON(is_shadow_present_pte(iter->old_spte), vcpu->kvm);
> +	KVM_BUG_ON(is_removed_spte(iter->old_spte), vcpu->kvm);
> +
> +	sp = tdp_mmu_alloc_sp(vcpu);
> +	tdp_mmu_init_child_sp(sp, iter);
> +
> +	ret = tdp_mmu_link_sp(vcpu->kvm, iter, sp, account_nx, true);
> +	if (ret)
> +		tdp_mmu_free_sp(sp);
> +	return ret;
> +}
> +
>  /*
>   * Handle a TDP page fault (NPT/EPT violation/misconfiguration) by installing
>   * page tables and SPTEs to translate the faulting guest physical address.
> @@ -1165,7 +1183,6 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  {
>  	struct kvm_mmu *mmu = vcpu->arch.mmu;
>  	struct tdp_iter iter;
> -	struct kvm_mmu_page *sp;
>  	int ret;
>
>  	kvm_mmu_hugepage_adjust(vcpu, fault);
> @@ -1211,13 +1228,8 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  			if (is_removed_spte(iter.old_spte))
>  				break;
>
> -			sp = tdp_mmu_alloc_sp(vcpu);
> -			tdp_mmu_init_child_sp(sp, &iter);
> -
> -			if (tdp_mmu_link_sp(vcpu->kvm, &iter, sp, account_nx, true)) {
> -				tdp_mmu_free_sp(sp);
> +			if (tdp_mmu_populate_nonleaf(vcpu, &iter, account_nx))
>  				break;
> -			}
>  		}
>  	}
>
On Wed, Nov 16, 2022 at 09:42:34AM +0000, "Huang, Kai" <kai.huang@intel.com> wrote:
> On Sat, 2022-10-29 at 23:22 -0700, isaku.yamahata@intel.com wrote:
> > From: Isaku Yamahata <isaku.yamahata@intel.com>
> >
> > Factor out non-leaf SPTE population logic from kvm_tdp_mmu_map(). MapGPA
> > hypercall needs to populate non-leaf SPTE to record which GPA, private or
> > shared, is allowed in the leaf EPT entry.
>
> Is this patch still valid/needed since you have changed to use XArray to store
> whether GFN is private or shared?

Because tdp_mmu_populate_nonleaf() won't be touched any more, this patch
doesn't have any benefit. Will drop it.
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 3325633b1cb5..11b0ec8aebe2 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1157,6 +1157,24 @@ static int tdp_mmu_link_sp(struct kvm *kvm, struct tdp_iter *iter,
 	return 0;
 }
 
+static int tdp_mmu_populate_nonleaf(struct kvm_vcpu *vcpu, struct tdp_iter *iter,
+				    bool account_nx)
+{
+	struct kvm_mmu_page *sp;
+	int ret;
+
+	KVM_BUG_ON(is_shadow_present_pte(iter->old_spte), vcpu->kvm);
+	KVM_BUG_ON(is_removed_spte(iter->old_spte), vcpu->kvm);
+
+	sp = tdp_mmu_alloc_sp(vcpu);
+	tdp_mmu_init_child_sp(sp, iter);
+
+	ret = tdp_mmu_link_sp(vcpu->kvm, iter, sp, account_nx, true);
+	if (ret)
+		tdp_mmu_free_sp(sp);
+	return ret;
+}
+
 /*
  * Handle a TDP page fault (NPT/EPT violation/misconfiguration) by installing
  * page tables and SPTEs to translate the faulting guest physical address.
@@ -1165,7 +1183,6 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
 	struct kvm_mmu *mmu = vcpu->arch.mmu;
 	struct tdp_iter iter;
-	struct kvm_mmu_page *sp;
 	int ret;
 
 	kvm_mmu_hugepage_adjust(vcpu, fault);
@@ -1211,13 +1228,8 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 			if (is_removed_spte(iter.old_spte))
 				break;
 
-			sp = tdp_mmu_alloc_sp(vcpu);
-			tdp_mmu_init_child_sp(sp, &iter);
-
-			if (tdp_mmu_link_sp(vcpu->kvm, &iter, sp, account_nx, true)) {
-				tdp_mmu_free_sp(sp);
+			if (tdp_mmu_populate_nonleaf(vcpu, &iter, account_nx))
 				break;
-			}
 		}
 	}