From patchwork Fri Jun 16 02:32:17 2023
X-Patchwork-Submitter: Yan Zhao
X-Patchwork-Id: 108795
From: Yan Zhao
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: pbonzini@redhat.com, seanjc@google.com, chao.gao@intel.com, kai.huang@intel.com, robert.hoo.linux@gmail.com, Yan Zhao
Subject: [PATCH v3 01/11] KVM: x86/mmu: helpers to return if KVM honors guest MTRRs
Date: Fri, 16 Jun 2023 10:32:17 +0800
Message-Id: <20230616023217.7081-1-yan.y.zhao@intel.com>
In-Reply-To: <20230616023101.7019-1-yan.y.zhao@intel.com>
References: <20230616023101.7019-1-yan.y.zhao@intel.com>

Added helpers to check if KVM honors guest MTRRs.
The inner helper __kvm_mmu_honors_guest_mtrrs() is also provided to
outside callers so they can check whether guest MTRRs were honored
before non-coherent DMA is stopped.

Suggested-by: Sean Christopherson
Signed-off-by: Yan Zhao
---
 arch/x86/kvm/mmu.h     |  7 +++++++
 arch/x86/kvm/mmu/mmu.c | 15 +++++++++++++++
 2 files changed, 22 insertions(+)

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 92d5a1924fc1..38bd449226f6 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -235,6 +235,13 @@ static inline u8 permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
         return -(u32)fault & errcode;
 }
 
+bool __kvm_mmu_honors_guest_mtrrs(struct kvm *kvm, bool vm_has_noncoherent_dma);
+
+static inline bool kvm_mmu_honors_guest_mtrrs(struct kvm *kvm)
+{
+        return __kvm_mmu_honors_guest_mtrrs(kvm, kvm_arch_has_noncoherent_dma(kvm));
+}
+
 void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end);
 
 int kvm_arch_write_log_dirty(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 1e5db621241f..b4f89f015c37 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4516,6 +4516,21 @@ static int kvm_tdp_mmu_page_fault(struct kvm_vcpu *vcpu,
 }
 #endif
 
+bool __kvm_mmu_honors_guest_mtrrs(struct kvm *kvm, bool vm_has_noncoherent_dma)
+{
+        /*
+         * If the TDP is enabled, the host MTRRs are ignored by TDP
+         * (shadow_memtype_mask is non-zero), and the VM has non-coherent DMA
+         * (DMA doesn't snoop CPU caches), KVM's ABI is to honor the memtype
+         * from the guest's MTRRs so that guest accesses to memory that is
+         * DMA'd aren't cached against the guest's wishes.
+         *
+         * Note, KVM may still ultimately ignore guest MTRRs for certain PFNs,
+         * e.g. KVM will force UC memtype for host MMIO.
+         */
+        return vm_has_noncoherent_dma && tdp_enabled && shadow_memtype_mask;
+}
+
 int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
         /*
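A minimal usage sketch of the two helpers (illustrative only, not part of the patch; the caller name below is hypothetical). The outer helper folds in the VM's current non-coherent DMA state, while the inner helper lets a caller supply that state itself, e.g. when asking whether guest MTRRs were honored just before the last non-coherent DMA device is torn down.

/* Hypothetical caller, not code from this series. */
static void example_noncoherent_dma_teardown(struct kvm *kvm)
{
        /*
         * Pass vm_has_noncoherent_dma = true explicitly: the question is
         * whether guest MTRRs were honored while the device was attached,
         * not whether they still are once the count drops to zero (which
         * is what the outer kvm_mmu_honors_guest_mtrrs(kvm) would answer).
         */
        if (__kvm_mmu_honors_guest_mtrrs(kvm, true))
                kvm_zap_gfn_range(kvm, 0, ~0ULL);
}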
From patchwork Fri Jun 16 02:34:35 2023
X-Patchwork-Submitter: Yan Zhao
X-Patchwork-Id: 108814
From: Yan Zhao
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: pbonzini@redhat.com, seanjc@google.com, chao.gao@intel.com, kai.huang@intel.com, robert.hoo.linux@gmail.com, Yan Zhao
Subject: [PATCH v3 02/11] KVM: x86/mmu: Use KVM honors guest MTRRs helper in kvm_tdp_page_fault()
Date: Fri, 16 Jun 2023 10:34:35 +0800
Message-Id: <20230616023435.7142-1-yan.y.zhao@intel.com>
In-Reply-To: <20230616023101.7019-1-yan.y.zhao@intel.com>
References: <20230616023101.7019-1-yan.y.zhao@intel.com>

Let kvm_tdp_page_fault() use the helper kvm_mmu_honors_guest_mtrrs() to
decide whether it needs to consult guest MTRRs to check GFN range
consistency.

No functional change intended.

Suggested-by: Sean Christopherson
Signed-off-by: Yan Zhao
---
 arch/x86/kvm/mmu/mmu.c | 11 ++---------
 1 file changed, 2 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index b4f89f015c37..7f52bbe013b3 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4536,16 +4536,9 @@ int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
         /*
          * If the guest's MTRRs may be used to compute the "real" memtype,
          * restrict the mapping level to ensure KVM uses a consistent memtype
-         * across the entire mapping. If the host MTRRs are ignored by TDP
-         * (shadow_memtype_mask is non-zero), and the VM has non-coherent DMA
-         * (DMA doesn't snoop CPU caches), KVM's ABI is to honor the memtype
-         * from the guest's MTRRs so that guest accesses to memory that is
-         * DMA'd aren't cached against the guest's wishes.
-         *
-         * Note, KVM may still ultimately ignore guest MTRRs for certain PFNs,
-         * e.g. KVM will force UC memtype for host MMIO.
+         * across the entire mapping.
          */
-        if (shadow_memtype_mask && kvm_arch_has_noncoherent_dma(vcpu->kvm)) {
+        if (kvm_mmu_honors_guest_mtrrs(vcpu->kvm)) {
                 for ( ; fault->max_level > PG_LEVEL_4K; --fault->max_level) {
                         int page_num = KVM_PAGES_PER_HPAGE(fault->max_level);
                         gfn_t base = gfn_round_for_level(fault->gfn,
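A worked example of the level restriction above (illustrative only; the GFN and level are made up, and the check itself is the kvm_mtrr_check_gfn_range_consistency() helper declared in x86.h):

/*
 * For a fault at a made-up gfn 0x12345 with fault->max_level == PG_LEVEL_2M:
 *
 *   page_num = KVM_PAGES_PER_HPAGE(PG_LEVEL_2M)          = 512
 *   base     = gfn_round_for_level(0x12345, PG_LEVEL_2M) = 0x12200
 *
 * The (truncated) loop body then checks that all 512 GFNs covered by the
 * would-be 2M mapping share one guest MTRR type; if they don't,
 * fault->max_level is lowered until the range is consistent, worst case
 * down to PG_LEVEL_4K, which trivially is.
 */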
From patchwork Fri Jun 16 02:35:24 2023
X-Patchwork-Submitter: Yan Zhao
X-Patchwork-Id: 108798
From: Yan Zhao
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: pbonzini@redhat.com, seanjc@google.com, chao.gao@intel.com, kai.huang@intel.com, robert.hoo.linux@gmail.com, Yan Zhao
Subject: [PATCH v3 03/11] KVM: x86/mmu: Use KVM honors guest MTRRs helper when CR0.CD toggles
Date: Fri, 16 Jun 2023 10:35:24 +0800
Message-Id: <20230616023524.7203-1-yan.y.zhao@intel.com>
In-Reply-To: <20230616023101.7019-1-yan.y.zhao@intel.com>
References: <20230616023101.7019-1-yan.y.zhao@intel.com>

Call the helper to check whether guest MTRRs are honored by the KVM MMU
before zapping, as the value of guest CR0.CD only affects KVM TDP memory
types when guest MTRRs are honored.

Suggested-by: Chao Gao
Signed-off-by: Yan Zhao
---
 arch/x86/kvm/x86.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9e7186864542..6693daeb5686 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -942,7 +942,7 @@ void kvm_post_set_cr0(struct kvm_vcpu *vcpu, unsigned long old_cr0, unsigned lon
                 kvm_mmu_reset_context(vcpu);
 
         if (((cr0 ^ old_cr0) & X86_CR0_CD) &&
-            kvm_arch_has_noncoherent_dma(vcpu->kvm) &&
+            kvm_mmu_honors_guest_mtrrs(vcpu->kvm) &&
             !kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED))
                 kvm_zap_gfn_range(vcpu->kvm, 0, ~0ULL);
 }
From patchwork Fri Jun 16 02:36:14 2023
X-Patchwork-Submitter: Yan Zhao
X-Patchwork-Id: 108810
From: Yan Zhao
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: pbonzini@redhat.com, seanjc@google.com, chao.gao@intel.com, kai.huang@intel.com, robert.hoo.linux@gmail.com, Yan Zhao
Subject: [PATCH v3 04/11] KVM: x86/mmu: Use KVM honors guest MTRRs helper when update mtrr
Date: Fri, 16 Jun 2023 10:36:14 +0800
Message-Id: <20230616023614.7261-1-yan.y.zhao@intel.com>
In-Reply-To: <20230616023101.7019-1-yan.y.zhao@intel.com>
References: <20230616023101.7019-1-yan.y.zhao@intel.com>

Call the helper to check whether guest MTRRs are honored by the KVM MMU
before doing the calculation and zapping.  Guest MTRRs only affect TDP
memtypes when TDP honors guest MTRRs; there is no point in doing the
calculation and zapping otherwise.

Suggested-by: Chao Gao
Suggested-by: Sean Christopherson
Cc: Kai Huang
Signed-off-by: Yan Zhao
---
 arch/x86/kvm/mtrr.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mtrr.c b/arch/x86/kvm/mtrr.c
index 3eb6e7f47e96..a67c28a56417 100644
--- a/arch/x86/kvm/mtrr.c
+++ b/arch/x86/kvm/mtrr.c
@@ -320,7 +320,7 @@ static void update_mtrr(struct kvm_vcpu *vcpu, u32 msr)
         struct kvm_mtrr *mtrr_state = &vcpu->arch.mtrr_state;
         gfn_t start, end;
 
-        if (!tdp_enabled || !kvm_arch_has_noncoherent_dma(vcpu->kvm))
+        if (!kvm_mmu_honors_guest_mtrrs(vcpu->kvm))
                 return;
 
         if (!mtrr_is_enabled(mtrr_state) && msr != MSR_MTRRdefType)
From patchwork Fri Jun 16 02:37:09 2023
X-Patchwork-Submitter: Yan Zhao
X-Patchwork-Id: 108813
From: Yan Zhao
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: pbonzini@redhat.com, seanjc@google.com, chao.gao@intel.com, kai.huang@intel.com, robert.hoo.linux@gmail.com, Yan Zhao
Subject: [PATCH v3 05/11] KVM: x86/mmu: zap KVM TDP when noncoherent DMA assignment starts/stops
Date: Fri, 16 Jun 2023 10:37:09 +0800
Message-Id: <20230616023709.7318-1-yan.y.zhao@intel.com>
In-Reply-To: <20230616023101.7019-1-yan.y.zhao@intel.com>
References: <20230616023101.7019-1-yan.y.zhao@intel.com>

Zap the KVM TDP when noncoherent DMA assignment starts (noncoherent DMA
count transitions from 0 to 1) or stops (noncoherent DMA count
transitions from 1 to 0).  Before the zap, test whether guest MTRRs are
to be honored after the assignment starts, or were honored before the
assignment stops.

When there is no noncoherent DMA device, the EPT memory type is
((MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT) | VMX_EPT_IPAT_BIT).

When there are noncoherent DMA devices, the EPT memory type needs to
honor guest CR0.CD and MTRR settings.

So, if the noncoherent DMA count transitions between 0 and 1, EPT leaf
entries need to be zapped to clear the stale memory type.

This issue might be hidden when the device is statically assigned,
because VFIO adds/removes the MMIO regions of the noncoherent DMA
devices several times during guest boot, and the current KVM MMU calls
kvm_mmu_zap_all_fast() on memslot removal.

But if the device is hot-plugged, or if the guest has mmio_always_on for
the device, its MMIO regions may be added only once, and then there is
no path that zaps the EPT entries to clear the stale memory type.

Therefore do the EPT zapping when noncoherent assignment starts/stops to
ensure stale entries are cleaned away.

Signed-off-by: Yan Zhao
---
 arch/x86/kvm/x86.c | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 6693daeb5686..ac9548efa76f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -13164,15 +13164,31 @@ bool noinstr kvm_arch_has_assigned_device(struct kvm *kvm)
 }
 EXPORT_SYMBOL_GPL(kvm_arch_has_assigned_device);
 
+static void kvm_noncoherent_dma_assignment_start_or_stop(struct kvm *kvm)
+{
+        /*
+         * Non-coherent DMA assignement and de-assignment will affect
+         * whether KVM honors guest MTRRs and cause changes in memtypes
+         * in TDP.
+         * So, specify the second parameter as true here to indicate
+         * non-coherent DMAs are/were involved and TDP zap might be
+         * necessary.
+         */
+        if (__kvm_mmu_honors_guest_mtrrs(kvm, true))
+                kvm_zap_gfn_range(kvm, gpa_to_gfn(0), gpa_to_gfn(~0ULL));
+}
+
 void kvm_arch_register_noncoherent_dma(struct kvm *kvm)
 {
-        atomic_inc(&kvm->arch.noncoherent_dma_count);
+        if (atomic_inc_return(&kvm->arch.noncoherent_dma_count) == 1)
+                kvm_noncoherent_dma_assignment_start_or_stop(kvm);
 }
 EXPORT_SYMBOL_GPL(kvm_arch_register_noncoherent_dma);
 
 void kvm_arch_unregister_noncoherent_dma(struct kvm *kvm)
 {
-        atomic_dec(&kvm->arch.noncoherent_dma_count);
+        if (!atomic_dec_return(&kvm->arch.noncoherent_dma_count))
+                kvm_noncoherent_dma_assignment_start_or_stop(kvm);
 }
 EXPORT_SYMBOL_GPL(kvm_arch_unregister_noncoherent_dma);
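A worked example of the memtype constant quoted in the changelog above (illustrative, not part of the patch; the numeric values assume the usual definitions MTRR_TYPE_WRBACK == 6, VMX_EPT_MT_EPTE_SHIFT == 3 and VMX_EPT_IPAT_BIT == (1 << 6)):

/*
 *   (MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT) | VMX_EPT_IPAT_BIT
 *       = (6 << 3) | (1 << 6) = 0x70
 *
 * i.e. EPT leaf entries installed while no non-coherent DMA device is
 * attached encode WB and "ignore guest PAT".  Those bits become stale the
 * moment the first non-coherent device is assigned, because guest CR0.CD
 * and MTRRs then start to matter, hence the zap on the 0 <-> 1
 * transitions of the noncoherent DMA count.
 */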
From patchwork Fri Jun 16 02:37:42 2023
X-Patchwork-Submitter: Yan Zhao
X-Patchwork-Id: 108797
From: Yan Zhao
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: pbonzini@redhat.com, seanjc@google.com, chao.gao@intel.com, kai.huang@intel.com, robert.hoo.linux@gmail.com, Yan Zhao
Subject: [PATCH v3 06/11] KVM: x86/mmu: move TDP zaps from guest MTRRs update to CR0.CD toggling
Date: Fri, 16 Jun 2023 10:37:42 +0800
Message-Id: <20230616023742.7379-1-yan.y.zhao@intel.com>
In-Reply-To: <20230616023101.7019-1-yan.y.zhao@intel.com>
References: <20230616023101.7019-1-yan.y.zhao@intel.com>

If guest MTRRs are honored, always zap the TDP when CR0.CD toggles, and
don't do it when guest MTRRs are updated under CR0.CD=1.

This is because CR0.CD=1 takes precedence over guest MTRRs when deciding
TDP memory types, so TDP memtypes are not changed by guest MTRR updates
made under CR0.CD=1.

Instead, always do the TDP zapping when CR0.CD toggles, because even
with the quirk KVM_X86_QUIRK_CD_NW_CLEARED, TDP memory types may change
after guest CR0.CD toggles.

Suggested-by: Sean Christopherson
Signed-off-by: Yan Zhao
---
 arch/x86/kvm/mtrr.c | 3 +++
 arch/x86/kvm/x86.c  | 3 +--
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mtrr.c b/arch/x86/kvm/mtrr.c
index a67c28a56417..3ce58734ad22 100644
--- a/arch/x86/kvm/mtrr.c
+++ b/arch/x86/kvm/mtrr.c
@@ -323,6 +323,9 @@ static void update_mtrr(struct kvm_vcpu *vcpu, u32 msr)
         if (!kvm_mmu_honors_guest_mtrrs(vcpu->kvm))
                 return;
 
+        if (kvm_is_cr0_bit_set(vcpu, X86_CR0_CD))
+                return;
+
         if (!mtrr_is_enabled(mtrr_state) && msr != MSR_MTRRdefType)
                 return;
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index ac9548efa76f..32cc8bfaa5f1 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -942,8 +942,7 @@ void kvm_post_set_cr0(struct kvm_vcpu *vcpu, unsigned long old_cr0, unsigned lon
                 kvm_mmu_reset_context(vcpu);
 
         if (((cr0 ^ old_cr0) & X86_CR0_CD) &&
-            kvm_mmu_honors_guest_mtrrs(vcpu->kvm) &&
-            !kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED))
+            kvm_mmu_honors_guest_mtrrs(vcpu->kvm))
                 kvm_zap_gfn_range(vcpu->kvm, 0, ~0ULL);
 }
 EXPORT_SYMBOL_GPL(kvm_post_set_cr0);
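A sketch of the memtype precedence the changelog relies on (illustrative only, condensed from vmx_get_mt_mask() as it looks with patches 7-8 of this series applied; not code from this patch):

/*
 *   host MMIO                  -> UC, guest settings ignored
 *   no non-coherent DMA        -> WB + IPAT
 *   CR0.CD = 1                 -> quirk ? WB : UC + IPAT
 *   otherwise                  -> guest MTRR type for the gfn
 *
 * An MTRR update made while CR0.CD=1 never reaches the last branch, so
 * it cannot change any memtype installed under CD=1; the zap on the
 * CR0.CD toggle is what picks up the new MTRR values.
 */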
From patchwork Fri Jun 16 02:38:15 2023
X-Patchwork-Submitter: Yan Zhao
X-Patchwork-Id: 108805
From: Yan Zhao
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: pbonzini@redhat.com, seanjc@google.com, chao.gao@intel.com, kai.huang@intel.com, robert.hoo.linux@gmail.com, Yan Zhao
Subject: [PATCH v3 07/11] KVM: VMX: drop IPAT in memtype when CD=1 for KVM_X86_QUIRK_CD_NW_CLEARED
Date: Fri, 16 Jun 2023 10:38:15 +0800
Message-Id: <20230616023815.7439-1-yan.y.zhao@intel.com>
In-Reply-To: <20230616023101.7019-1-yan.y.zhao@intel.com>
References: <20230616023101.7019-1-yan.y.zhao@intel.com>

For KVM_X86_QUIRK_CD_NW_CLEARED, remove the "ignore PAT" bit from the
EPT memory type when the cache is disabled and non-coherent DMA is
present.

Before this patch, with the quirk KVM_X86_QUIRK_CD_NW_CLEARED, WB + IPAT
is returned as the EPT memory type when the guest cache is disabled.
Removing the IPAT bit allows the effective memory type to honor guest
PAT values as well, which can make the effective memory type stronger
than WB, as WB is the weakest memtype.  However, this change is
acceptable because it does not make the memory type weaker; without the
quirk, the original memtype for cache disabled is UC + IPAT.

Suggested-by: Sean Christopherson
Signed-off-by: Yan Zhao
---
 arch/x86/kvm/vmx/vmx.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 0ecf4be2c6af..c1e93678cea4 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7548,8 +7548,6 @@ static int vmx_vm_init(struct kvm *kvm)
 
 static u8 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
 {
-        u8 cache;
-
         /* We wanted to honor guest CD/MTRR/PAT, but doing so could result in
          * memory aliases with conflicting memory types and sometimes MCEs.
          * We have to be careful as to what are honored and when.
@@ -7576,11 +7574,10 @@ static u8 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
 
         if (kvm_read_cr0_bits(vcpu, X86_CR0_CD)) {
                 if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED))
-                        cache = MTRR_TYPE_WRBACK;
+                        return MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT;
                 else
-                        cache = MTRR_TYPE_UNCACHABLE;
-
-                return (cache << VMX_EPT_MT_EPTE_SHIFT) | VMX_EPT_IPAT_BIT;
+                        return (MTRR_TYPE_UNCACHABLE << VMX_EPT_MT_EPTE_SHIFT) |
+                               VMX_EPT_IPAT_BIT;
         }
 
         return kvm_mtrr_get_guest_memory_type(vcpu, gfn) << VMX_EPT_MT_EPTE_SHIFT;
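A before/after view of what vmx_get_mt_mask() returns for CR0.CD=1 with the quirk (illustrative only; the numeric values assume the usual VMX_EPT_MT_EPTE_SHIFT == 3 and VMX_EPT_IPAT_BIT == (1 << 6)):

/*
 *   before: (MTRR_TYPE_WRBACK << 3) | (1 << 6) = 0x70   WB, guest PAT ignored
 *   after:   MTRR_TYPE_WRBACK << 3             = 0x30   WB, combined with guest PAT
 *
 * With IPAT clear, the guest's PAT type for the access is honored too,
 * so a UC or WC entry in the guest PAT yields an effective type stronger
 * than WB instead of being overridden.
 */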
From patchwork Fri Jun 16 02:38:58 2023
X-Patchwork-Submitter: Yan Zhao
X-Patchwork-Id: 108815
From: Yan Zhao
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: pbonzini@redhat.com, seanjc@google.com, chao.gao@intel.com, kai.huang@intel.com, robert.hoo.linux@gmail.com, Yan Zhao
Subject: [PATCH v3 08/11] KVM: x86: move vmx code to get EPT memtype when CR0.CD=1 to x86 common code
Date: Fri, 16 Jun 2023 10:38:58 +0800
Message-Id: <20230616023858.7503-1-yan.y.zhao@intel.com>
In-Reply-To: <20230616023101.7019-1-yan.y.zhao@intel.com>
References: <20230616023101.7019-1-yan.y.zhao@intel.com>

Move the code in vmx.c that gets the cache-disabled memtype when
non-coherent DMA is present into x86 common code.

This is a preparation patch for the later implementation of fine-grained
gfn zap for CR0.CD toggles when guest MTRRs are honored.

No functional change intended.

Signed-off-by: Yan Zhao
---
 arch/x86/kvm/mtrr.c    | 19 +++++++++++++++++++
 arch/x86/kvm/vmx/vmx.c | 10 +++++-----
 arch/x86/kvm/x86.h     |  1 +
 3 files changed, 25 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mtrr.c b/arch/x86/kvm/mtrr.c
index 3ce58734ad22..b35dd0bc9cad 100644
--- a/arch/x86/kvm/mtrr.c
+++ b/arch/x86/kvm/mtrr.c
@@ -721,3 +721,22 @@ bool kvm_mtrr_check_gfn_range_consistency(struct kvm_vcpu *vcpu, gfn_t gfn,
 
         return type == mtrr_default_type(mtrr_state);
 }
+
+void kvm_mtrr_get_cd_memory_type(struct kvm_vcpu *vcpu, u8 *type, bool *ipat)
+{
+        /*
+         * this routine is supposed to be called when guest mtrrs are honored
+         */
+        if (unlikely(!kvm_mmu_honors_guest_mtrrs(vcpu->kvm))) {
+                *type = MTRR_TYPE_WRBACK;
+                *ipat = true;
+        } else if (unlikely(!kvm_check_has_quirk(vcpu->kvm,
+                                                 KVM_X86_QUIRK_CD_NW_CLEARED))) {
+                *type = MTRR_TYPE_UNCACHABLE;
+                *ipat = true;
+        } else {
+                *type = MTRR_TYPE_WRBACK;
+                *ipat = false;
+        }
+}
+EXPORT_SYMBOL_GPL(kvm_mtrr_get_cd_memory_type);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index c1e93678cea4..6414c5a6e892 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7573,11 +7573,11 @@ static u8 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
                 return (MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT) | VMX_EPT_IPAT_BIT;
 
         if (kvm_read_cr0_bits(vcpu, X86_CR0_CD)) {
-                if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED))
-                        return MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT;
-                else
-                        return (MTRR_TYPE_UNCACHABLE << VMX_EPT_MT_EPTE_SHIFT) |
-                               VMX_EPT_IPAT_BIT;
+                bool ipat;
+                u8 cache;
+
+                kvm_mtrr_get_cd_memory_type(vcpu, &cache, &ipat);
VMX_EPT_IPAT_BIT : 0); } return kvm_mtrr_get_guest_memory_type(vcpu, gfn) << VMX_EPT_MT_EPTE_SHIFT; diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h index 82e3dafc5453..9781b4b32d68 100644 --- a/arch/x86/kvm/x86.h +++ b/arch/x86/kvm/x86.h @@ -313,6 +313,7 @@ int kvm_mtrr_set_msr(struct kvm_vcpu *vcpu, u32 msr, u64 data); int kvm_mtrr_get_msr(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata); bool kvm_mtrr_check_gfn_range_consistency(struct kvm_vcpu *vcpu, gfn_t gfn, int page_num); +void kvm_mtrr_get_cd_memory_type(struct kvm_vcpu *vcpu, u8 *type, bool *ipat); bool kvm_vector_hashing_enabled(void); void kvm_fixup_and_inject_pf_error(struct kvm_vcpu *vcpu, gva_t gva, u16 error_code); int x86_decode_emulated_instruction(struct kvm_vcpu *vcpu, int emulation_type, From patchwork Fri Jun 16 02:39:45 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yan Zhao X-Patchwork-Id: 108804 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp1060314vqr; Thu, 15 Jun 2023 20:18:32 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ4hK0FNON+olgoplis9srJai/9PkXc1VpH7CJO5b1CZ43zRB07N0PrUyH2YrKkvNZnUZZT7 X-Received: by 2002:a05:6871:6a87:b0:19b:8564:6aac with SMTP id zf7-20020a0568716a8700b0019b85646aacmr830979oab.28.1686885512443; Thu, 15 Jun 2023 20:18:32 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1686885512; cv=none; d=google.com; s=arc-20160816; b=MsVqF1tpgoy0dP1uLwUufnpPYcTIj4zWSSxiHcgjnbwTw/3J4QSFSCDcL3szY1nQri HjuYV40SqIyeuRx+pfyt5csVVoAj7IOh7AybBMpMWouEo4J6Vvn/XAc53HJCPxPg6hiS kv1sBuMiao3w9j3yf6xHr6r9JjkfJj/BbwUKnJX6jkU9CQWbdlQoQFTRtotNHNKQ0R/n HL5JsyHr28sbZZ8pMHfrzvcS0qU9WZSa4BHBybLDBALsNmxW9jszwgEavaVww33KGigs duon2Tdh35KF2ISAp3nGg78yEf2s/HdVcjxtFEPakvEyUW6TwIGn43NqrnmuC+JAYfiH yZ9w== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from:dkim-signature; bh=aTL6wibxCrRG5gvi3e3omXxxfbj52gP/hZf8an/VQb0=; b=RgAA7y6qEKsN0Dd89u5pru2+UbqcyaEqydKFMKMMxT2vhc1rm9WWd548tRY/g3xx0R +lbqEzSsppikv7DEl8Cj12tz4H+q3RqmvRIOQQHFqWxPeju/DMCfBgTmok4V1sk8zOp5 uiotyrx7iOS6V4W9srn+vVhD7p6QBFg2x/rmTw+msEtWbmQ21fAuP5N7bC88DSbtdbrw UPZHN8z7Dvo+s2D+xgDGt7Rm0oNN7zOoA124OtVGzIro8B8xuF5Md4lvAqEgksKWfT40 g75M36A05M9+KiVst1zA5alFzb7tnmGPezJrshbU+mW7nBKljfPE1xo6IV9RjHuuKGHv 92ew== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=ZWwKHmhM; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id a27-20020aa794bb000000b0062565210347si11467916pfl.275.2023.06.15.20.18.20; Thu, 15 Jun 2023 20:18:32 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=ZWwKHmhM; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S241465AbjFPDFS (ORCPT + 99 others); Thu, 15 Jun 2023 23:05:18 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33894 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S241691AbjFPDE7 (ORCPT ); Thu, 15 Jun 2023 23:04:59 -0400 Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4E70F2D58; Thu, 15 Jun 2023 20:04:57 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1686884698; x=1718420698; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=gRnQaSa7yFgKDt0BHybvJWmtcW9MjixKOmKtXyz6GPg=; b=ZWwKHmhMMME9YavITx8fIQ/iTbKqkY7U6XV8N9HnQ6B92ZXEygNd7I8s ph22Qt/jSo5DdvwASY25JDAtIQUQD7qLR/XQBvF0JA+d2IcqlntxBrpFa Hd/QVz4wgmk9Q6DRf4853gq8P8bLY7Pc803/btuBquOXVV4YwwHWGV/lo 5DVVrjGeAU6IZUzMtWhXwjZ3aB9Q21G2DTq6W8M9ahLZsehbjlze2YIXY 3+LVP9o4oosNju25EACMA/vD1yy5vyCAgUVJ9ph6cy3JPdkta3VFyg373 83bGBK66494qc+V923Ho3nh4hO/ySN805T+E6EgVmpoYdQjfR2ZEqNnue A==; X-IronPort-AV: E=McAfee;i="6600,9927,10742"; a="445482660" X-IronPort-AV: E=Sophos;i="6.00,246,1681196400"; d="scan'208";a="445482660" Received: from orsmga001.jf.intel.com ([10.7.209.18]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 15 Jun 2023 20:04:57 -0700 X-IronPort-AV: E=McAfee;i="6600,9927,10742"; a="745964267" X-IronPort-AV: E=Sophos;i="6.00,246,1681196400"; d="scan'208";a="745964267" Received: from yzhao56-desk.sh.intel.com ([10.239.159.62]) by orsmga001-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 15 Jun 2023 20:04:54 -0700 From: Yan Zhao To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org Cc: pbonzini@redhat.com, seanjc@google.com, chao.gao@intel.com, kai.huang@intel.com, robert.hoo.linux@gmail.com, Yan Zhao Subject: [PATCH v3 09/11] KVM: x86/mmu: serialize vCPUs to zap gfn when guest MTRRs are honored Date: Fri, 16 Jun 2023 10:39:45 +0800 Message-Id: <20230616023945.7570-1-yan.y.zhao@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230616023101.7019-1-yan.y.zhao@intel.com> References: <20230616023101.7019-1-yan.y.zhao@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1768827663268728572?= X-GMAIL-MSGID: =?utf-8?q?1768827663268728572?= Serialize concurrent and repeated calls of kvm_zap_gfn_range() from every vCPU for CR0.CD toggles and MTRR updates 
when guest MTRRs are honored. During guest boot-up, if guest MTRRs are honored by TDP, TDP zaps are triggered several times by each vCPU for CR0.CD toggles and MTRR updates. This takes unexpectedly long CPU cycles because of contention on kvm->mmu_lock. Therefore, introduce a mtrr_zap_list to remove duplicated zaps and an atomic mtrr_zapping to allow only one vCPU to do the real zap work at a time. Suggested-by: Sean Christopherson Co-developed-by: Sean Christopherson Signed-off-by: Sean Christopherson Signed-off-by: Yan Zhao --- arch/x86/include/asm/kvm_host.h | 4 + arch/x86/kvm/mtrr.c | 141 +++++++++++++++++++++++++++++++- arch/x86/kvm/x86.c | 2 +- arch/x86/kvm/x86.h | 1 + 4 files changed, 146 insertions(+), 2 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 28bd38303d70..8da1517a1513 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1444,6 +1444,10 @@ struct kvm_arch { */ #define SPLIT_DESC_CACHE_MIN_NR_OBJECTS (SPTE_ENT_PER_PAGE + 1) struct kvm_mmu_memory_cache split_desc_cache; + + struct list_head mtrr_zap_list; + spinlock_t mtrr_zap_list_lock; + atomic_t mtrr_zapping; }; struct kvm_vm_stat { diff --git a/arch/x86/kvm/mtrr.c b/arch/x86/kvm/mtrr.c index b35dd0bc9cad..688748e3a4d2 100644 --- a/arch/x86/kvm/mtrr.c +++ b/arch/x86/kvm/mtrr.c @@ -25,6 +25,8 @@ #define IA32_MTRR_DEF_TYPE_FE (1ULL << 10) #define IA32_MTRR_DEF_TYPE_TYPE_MASK (0xff) +static void kvm_mtrr_zap_gfn_range(struct kvm_vcpu *vcpu, + gfn_t gfn_start, gfn_t gfn_end); static bool is_mtrr_base_msr(unsigned int msr) { /* MTRR base MSRs use even numbers, masks use odd numbers. */ @@ -341,7 +343,7 @@ static void update_mtrr(struct kvm_vcpu *vcpu, u32 msr) var_mtrr_range(var_mtrr_msr_to_range(vcpu, msr), &start, &end); } - kvm_zap_gfn_range(vcpu->kvm, gpa_to_gfn(start), gpa_to_gfn(end)); + kvm_mtrr_zap_gfn_range(vcpu, gpa_to_gfn(start), gpa_to_gfn(end)); } static bool var_mtrr_range_is_valid(struct kvm_mtrr_range *range) @@ -437,6 +439,11 @@ int kvm_mtrr_get_msr(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata) void kvm_vcpu_mtrr_init(struct kvm_vcpu *vcpu) { INIT_LIST_HEAD(&vcpu->arch.mtrr_state.head); + + if (vcpu->vcpu_id == 0) { + spin_lock_init(&vcpu->kvm->arch.mtrr_zap_list_lock); + INIT_LIST_HEAD(&vcpu->kvm->arch.mtrr_zap_list); + } } struct mtrr_iter { @@ -740,3 +747,135 @@ void kvm_mtrr_get_cd_memory_type(struct kvm_vcpu *vcpu, u8 *type, bool *ipat) } } EXPORT_SYMBOL_GPL(kvm_mtrr_get_cd_memory_type); + +struct mtrr_zap_range { + gfn_t start; + /* end is exclusive */ + gfn_t end; + struct list_head node; +}; + +static void kvm_clear_mtrr_zap_list(struct kvm *kvm) +{ + struct list_head *head = &kvm->arch.mtrr_zap_list; + struct mtrr_zap_range *tmp, *n; + + spin_lock(&kvm->arch.mtrr_zap_list_lock); + list_for_each_entry_safe(tmp, n, head, node) { + list_del(&tmp->node); + kfree(tmp); + } + spin_unlock(&kvm->arch.mtrr_zap_list_lock); +} + +/* + * Add @range into kvm->arch.mtrr_zap_list and sort the list in + * "length" ascending + "start" descending order, so that + * ranges consuming more zap cycles can be dequeued later and their + * chances of being found duplicated are increased.
+ */ +static void kvm_add_mtrr_zap_list(struct kvm *kvm, struct mtrr_zap_range *range) +{ + struct list_head *head = &kvm->arch.mtrr_zap_list; + u64 len = range->end - range->start; + struct mtrr_zap_range *cur, *n; + bool added = false; + + spin_lock(&kvm->arch.mtrr_zap_list_lock); + + if (list_empty(head)) { + list_add(&range->node, head); + spin_unlock(&kvm->arch.mtrr_zap_list_lock); + return; + } + + list_for_each_entry_safe(cur, n, head, node) { + u64 cur_len = cur->end - cur->start; + + if (len < cur_len) + break; + + if (len > cur_len) + continue; + + if (range->start > cur->start) + break; + + if (range->start < cur->start) + continue; + + /* equal len & start, no need to add */ + added = true; + kfree(range); + break; + } + + if (!added) + list_add_tail(&range->node, &cur->node); + + spin_unlock(&kvm->arch.mtrr_zap_list_lock); +} + +static void kvm_zap_mtrr_zap_list(struct kvm *kvm) +{ + struct list_head *head = &kvm->arch.mtrr_zap_list; + struct mtrr_zap_range *cur = NULL; + + spin_lock(&kvm->arch.mtrr_zap_list_lock); + + while (!list_empty(head)) { + u64 start, end; + + cur = list_first_entry(head, typeof(*cur), node); + start = cur->start; + end = cur->end; + list_del(&cur->node); + kfree(cur); + spin_unlock(&kvm->arch.mtrr_zap_list_lock); + + kvm_zap_gfn_range(kvm, start, end); + + spin_lock(&kvm->arch.mtrr_zap_list_lock); + } + + spin_unlock(&kvm->arch.mtrr_zap_list_lock); +} + +static void kvm_zap_or_wait_mtrr_zap_list(struct kvm *kvm) +{ + if (atomic_cmpxchg_acquire(&kvm->arch.mtrr_zapping, 0, 1) == 0) { + kvm_zap_mtrr_zap_list(kvm); + atomic_set_release(&kvm->arch.mtrr_zapping, 0); + return; + } + + while (atomic_read(&kvm->arch.mtrr_zapping)) + cpu_relax(); +} + +static void kvm_mtrr_zap_gfn_range(struct kvm_vcpu *vcpu, + gfn_t gfn_start, gfn_t gfn_end) +{ + struct mtrr_zap_range *range; + + range = kmalloc(sizeof(*range), GFP_KERNEL_ACCOUNT); + if (!range) + goto fail; + + range->start = gfn_start; + range->end = gfn_end; + + kvm_add_mtrr_zap_list(vcpu->kvm, range); + + kvm_zap_or_wait_mtrr_zap_list(vcpu->kvm); + return; + +fail: + kvm_clear_mtrr_zap_list(vcpu->kvm); + kvm_zap_gfn_range(vcpu->kvm, gfn_start, gfn_end); +} + +void kvm_zap_gfn_range_on_cd_toggle(struct kvm_vcpu *vcpu) +{ + return kvm_mtrr_zap_gfn_range(vcpu, gpa_to_gfn(0), gpa_to_gfn(~0ULL)); +} diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 32cc8bfaa5f1..74aac14a3c0b 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -943,7 +943,7 @@ void kvm_post_set_cr0(struct kvm_vcpu *vcpu, unsigned long old_cr0, unsigned lon if (((cr0 ^ old_cr0) & X86_CR0_CD) && kvm_mmu_honors_guest_mtrrs(vcpu->kvm)) - kvm_zap_gfn_range(vcpu->kvm, 0, ~0ULL); + kvm_zap_gfn_range_on_cd_toggle(vcpu); } EXPORT_SYMBOL_GPL(kvm_post_set_cr0); diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h index 9781b4b32d68..be946aba2bf0 100644 --- a/arch/x86/kvm/x86.h +++ b/arch/x86/kvm/x86.h @@ -314,6 +314,7 @@ int kvm_mtrr_get_msr(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata); bool kvm_mtrr_check_gfn_range_consistency(struct kvm_vcpu *vcpu, gfn_t gfn, int page_num); void kvm_mtrr_get_cd_memory_type(struct kvm_vcpu *vcpu, u8 *type, bool *ipat); +void kvm_zap_gfn_range_on_cd_toggle(struct kvm_vcpu *vcpu); bool kvm_vector_hashing_enabled(void); void kvm_fixup_and_inject_pf_error(struct kvm_vcpu *vcpu, gva_t gva, u16 error_code); int x86_decode_emulated_instruction(struct kvm_vcpu *vcpu, int emulation_type, From patchwork Fri Jun 16 02:41:34 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 
7bit X-Patchwork-Submitter: Yan Zhao X-Patchwork-Id: 108811 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp1071390vqr; Thu, 15 Jun 2023 20:55:54 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ78GrRINn1nrcuFAsiDRx/gm+RmSi4XrcPL7TxPuBciYVcS4Fy2OS0/zZK91N6MeGzaJs+9 X-Received: by 2002:a05:6a21:9818:b0:101:2160:ff8f with SMTP id ue24-20020a056a21981800b001012160ff8fmr1039751pzb.11.1686887753689; Thu, 15 Jun 2023 20:55:53 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1686887753; cv=none; d=google.com; s=arc-20160816; b=NpZHVjkZax+l9r/spAX8WtCSwUE0sQ8SvxKEcz7bzhuc7eMRloe/saqiOqRuOAaonZ 0e3D7/CECU5ykzzwLRG7TyYfr2zx8M6ekucKVJQsLO5/7S/7rHkJuc8xf3FZrqfm9j60 9+KpO3UjT9KeZih+8wjX4NrcfkcvUaCbUjAzkOdFrjC5hzF8FT0hilK2JQ4C9Hros+PQ FEQBUeORhkpmsTIbUR9+Ocnu9GuVn8/7qJCDtxvmyAHgDL6YwAItOf2h7O1YThqiLEI7 3yPVJo8y1lOTeUkvzUJ+48TqRSOnLmL0PpnWdZAYm5N0HGk9QJuP0WHb/FYKzCxnllhX TlcA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from:dkim-signature; bh=azmPYrFibSHimXl2FVOVbe50sza87Bb3ubYV/1d6EQs=; b=B1iyEi/XJGA2FDmBZabo+eMz2y5vCfOFP9sek/VB2nvCZRfJUefUGIW8117EiMlkAB zpczOmNq+HIs5k+Wj1s6GsUAzqrBOIjfh+MGDIZ36LsDnqbcQwvjzWJ4qtcRXxIIM9aK 8lhjGxq5hfCJr/Y86J1CokJ3msEUSL65fz4eD0pvQzsXhfoXWdjbptvfHx8Q+/pkgvMi 2XbskJAafbPRnsxFW+32ReUcIZ0XXE+Hziuw2Jg60g3waArM9jrCodP9o6XTC5DXyHo3 mitKva/eb0vIPvKCWQ9XgLLySnJcgWMkVQ9QhIMhd98VhxepeLbjcgMwXLDWpYTASYWq buAw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=nLsJNV6L; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id q11-20020a170902bd8b00b001ae141947acsi9547828pls.183.2023.06.15.20.55.38; Thu, 15 Jun 2023 20:55:53 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=nLsJNV6L; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S241513AbjFPDGu (ORCPT + 99 others); Thu, 15 Jun 2023 23:06:50 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34788 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S241420AbjFPDGr (ORCPT ); Thu, 15 Jun 2023 23:06:47 -0400 Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CCC4D297D; Thu, 15 Jun 2023 20:06:46 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1686884806; x=1718420806; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=haZIAE2eU8HEF6uKNlr28ZLP3izTWpZyD2lkLgNpwh4=; b=nLsJNV6LnXMu6N56RugHPPdb7hG8Fa2WzNQKSXUotMSnmke0ANq1QfpP ZPyPcBIZWnfmwC8WTT2JI6hWXDEIo/v6PHTRovi9IUMd2473nK+doyV3Q T3mJPRYjs0L9u1orcsFRVlsE1PYp/ObgqCGcApYcnUsz/Nj6qY6UhVf7Y EdHs1HNF0t2GzXVMKAW/MlVm/JyMuyV2KkxQUL1medJPdzn3E4DRmeleM Of7L35iixx32N9RWpSc1sRKCEOTCnEhKq1k3BeX/Y/HaRuTzoBdGGIrey XxneMQKHpK6QcZcr5nMeEaNZM5pYTmQLFHq4QNdKo4LJQgCpyPvU/5L/K Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10742"; a="445482981" X-IronPort-AV: E=Sophos;i="6.00,246,1681196400"; d="scan'208";a="445482981" Received: from orsmga006.jf.intel.com ([10.7.209.51]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 15 Jun 2023 20:06:46 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10742"; a="690038845" X-IronPort-AV: E=Sophos;i="6.00,246,1681196400"; d="scan'208";a="690038845" Received: from yzhao56-desk.sh.intel.com ([10.239.159.62]) by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 15 Jun 2023 20:06:43 -0700 From: Yan Zhao To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org Cc: pbonzini@redhat.com, seanjc@google.com, chao.gao@intel.com, kai.huang@intel.com, robert.hoo.linux@gmail.com, Yan Zhao Subject: [PATCH v3 10/11] KVM: x86/mmu: fine-grained gfn zap when guest MTRRs are honored Date: Fri, 16 Jun 2023 10:41:34 +0800 Message-Id: <20230616024134.7649-1-yan.y.zhao@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230616023101.7019-1-yan.y.zhao@intel.com> References: <20230616023101.7019-1-yan.y.zhao@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1768830012911320490?= X-GMAIL-MSGID: =?utf-8?q?1768830012911320490?= Find out guest MTRR ranges of non-default type and only zap those ranges when guest MTRRs are enabled and MTRR 
default type equals the memtype of CR0.CD=1. This allows fine-grained and faster gfn zap because of precise and shorter zap ranges, and increases the chances for concurrent vCPUs to find existing ranges to zap in the zap list. Incidentally, fix a typo in the original comment. Suggested-by: Sean Christopherson Co-developed-by: Sean Christopherson Signed-off-by: Sean Christopherson Signed-off-by: Yan Zhao --- arch/x86/kvm/mtrr.c | 108 +++++++++++++++++++++++++++++++++++++++++++- 1 file changed, 106 insertions(+), 2 deletions(-) diff --git a/arch/x86/kvm/mtrr.c b/arch/x86/kvm/mtrr.c index 688748e3a4d2..e2a097822a62 100644 --- a/arch/x86/kvm/mtrr.c +++ b/arch/x86/kvm/mtrr.c @@ -179,7 +179,7 @@ static struct fixed_mtrr_segment fixed_seg_table[] = { { .start = 0xc0000, .end = 0x100000, - .range_shift = 12, /* 12K */ + .range_shift = 12, /* 4K */ .range_start = 24, } }; @@ -816,6 +816,67 @@ static void kvm_add_mtrr_zap_list(struct kvm *kvm, struct mtrr_zap_range *range) spin_unlock(&kvm->arch.mtrr_zap_list_lock); } +/* + * Fixed ranges are only 256 pages in total. + * After balancing between reducing overhead of zap multiple ranges + * and increasing chances of finding duplicated ranges, + * just add fixed mtrr ranges as a whole to the mtrr zap list + * if memory type of one of them is not the specified type. + */ +static int prepare_zaplist_fixed_mtrr_of_non_type(struct kvm_vcpu *vcpu, u8 type) +{ + struct kvm_mtrr *mtrr_state = &vcpu->arch.mtrr_state; + struct mtrr_zap_range *range; + int index, seg_end; + u8 mem_type; + + for (index = 0; index < KVM_NR_FIXED_MTRR_REGION; index++) { + mem_type = mtrr_state->fixed_ranges[index]; + + if (mem_type == type) + continue; + + range = kmalloc(sizeof(*range), GFP_KERNEL_ACCOUNT); + if (!range) + return -ENOMEM; + + seg_end = ARRAY_SIZE(fixed_seg_table) - 1; + range->start = gpa_to_gfn(fixed_seg_table[0].start); + range->end = gpa_to_gfn(fixed_seg_table[seg_end].end); + kvm_add_mtrr_zap_list(vcpu->kvm, range); + break; + } + return 0; +} + +/* + * Add var mtrr ranges to the mtrr zap list + * if its memory type does not equal to type + */ +static int prepare_zaplist_var_mtrr_of_non_type(struct kvm_vcpu *vcpu, u8 type) +{ + struct kvm_mtrr *mtrr_state = &vcpu->arch.mtrr_state; + struct mtrr_zap_range *range; + struct kvm_mtrr_range *tmp; + u8 mem_type; + + list_for_each_entry(tmp, &mtrr_state->head, node) { + mem_type = tmp->base & 0xff; + if (mem_type == type) + continue; + + range = kmalloc(sizeof(*range), GFP_KERNEL_ACCOUNT); + if (!range) + return -ENOMEM; + + var_mtrr_range(tmp, &range->start, &range->end); + range->start = gpa_to_gfn(range->start); + range->end = gpa_to_gfn(range->end); + kvm_add_mtrr_zap_list(vcpu->kvm, range); + } + return 0; +} + static void kvm_zap_mtrr_zap_list(struct kvm *kvm) { struct list_head *head = &kvm->arch.mtrr_zap_list; @@ -875,7 +936,50 @@ static void kvm_mtrr_zap_gfn_range(struct kvm_vcpu *vcpu, kvm_zap_gfn_range(vcpu->kvm, gfn_start, gfn_end); } +/* + * Zap GFN ranges when CR0.CD toggles between 0 and 1. + * With noncoherent DMA present, + * when CR0.CD=1, TDP memtype is WB or UC + IPAT; + * when CR0.CD=0, TDP memtype is determined by guest MTRR. + * Therefore, if the cache disabled memtype is different from default memtype + * in guest MTRR, everything is zapped; + * if the cache disabled memtype is equal to default memtype in guest MTRR, + * only MTRR ranges of non-default-memtype are required to be zapped.
+ */ void kvm_zap_gfn_range_on_cd_toggle(struct kvm_vcpu *vcpu) { - return kvm_mtrr_zap_gfn_range(vcpu, gpa_to_gfn(0), gpa_to_gfn(~0ULL)); + struct kvm_mtrr *mtrr_state = &vcpu->arch.mtrr_state; + bool mtrr_enabled = mtrr_is_enabled(mtrr_state); + u8 default_type; + u8 cd_type; + bool ipat; + + kvm_mtrr_get_cd_memory_type(vcpu, &cd_type, &ipat); + + default_type = mtrr_enabled ? mtrr_default_type(mtrr_state) : + mtrr_disabled_type(vcpu); + + if (cd_type != default_type || ipat) + return kvm_mtrr_zap_gfn_range(vcpu, gpa_to_gfn(0), gpa_to_gfn(~0ULL)); + + /* + * If mtrr is not enabled, it will go to zap all above if the default + * type does not equal to cd_type; + * Or it has no need to zap if the default type equals to cd_type. + */ + if (mtrr_enabled) { + if (prepare_zaplist_fixed_mtrr_of_non_type(vcpu, default_type)) + goto fail; + + if (prepare_zaplist_var_mtrr_of_non_type(vcpu, default_type)) + goto fail; + + kvm_zap_or_wait_mtrr_zap_list(vcpu->kvm); + } + return; +fail: + kvm_clear_mtrr_zap_list(vcpu->kvm); + /* resort to zapping all on failure*/ + kvm_zap_gfn_range(vcpu->kvm, 0, ~0ULL); + return; } From patchwork Fri Jun 16 02:42:13 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yan Zhao X-Patchwork-Id: 108803 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:994d:0:b0:3d9:f83d:47d9 with SMTP id k13csp1060254vqr; Thu, 15 Jun 2023 20:18:22 -0700 (PDT) X-Google-Smtp-Source: ACHHUZ4nqn+mfom/OiidNc+v0TdNojfxViaHTUGoJD80Zju4DwPRGM8GsfqaC7PARpOmzb3GAadn X-Received: by 2002:a05:6a00:35ca:b0:662:3de1:2861 with SMTP id dc10-20020a056a0035ca00b006623de12861mr8788545pfb.6.1686885502537; Thu, 15 Jun 2023 20:18:22 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1686885502; cv=none; d=google.com; s=arc-20160816; b=aJ/w29RRfIo6bGsp/K618Fi+pvOOnOeL+n7Li00wPcx7ihmCi6SYKVKydKvFAaZb4J 9XRktstGEri6ITaEYXTL/GwKrSpWVhlcEdzWmajEhKH2MYufKpez805waCfQaIBadyIl FUMLjJQz3BI6e+8SPDltYbWyXFikcXKDZtSsduE/Z0pGDb5OHbK5IgyaSMN+UCkaZ7lT cG9Mvcp0jh0JbyuC2giQ75ZHTh973Qi9HwypzX1BOI7ksGG0RfqWsINETaJO7LhVRtMS c9bcgfRZXZNEDkcuZZqoNtz/OUG8Ire7KbZiYkURgwODkLYhJD2m3FZHoIRTu6ofh/BQ xTOg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:references:in-reply-to:message-id:date:subject :cc:to:from:dkim-signature; bh=lEsCrmmg2R/u8LdCvcG+4rtvSmFknkiE6frBCaEUcnc=; b=xJ/fLdtqmLaNSxEJ/EemlGJzTTXie02ifktnW4yWFcSP3FypweIvBd9IOvCBwQy+ZI uRmLsXKfwLXVAWuVX9hMPhsQGW8WskpAGMyGaNBmn9aXFmIBfYCBfTBHNiLjGjz2gkFz VGwf95euoyDRGa7iR9ATHHGmcD/PeLlpPR6fHB4nnrtzVNkiZJWAETddxGr7zVMU0Xo+ 5nbOVwpDl5BQcylouAsKOHP7NBUTBuocKLXaKAjqTlR1oNgv9ZGaBoNZ05wkaIdxzY8R dpxbeFAEGaWjfrmS/5lxi/FXI0hXGGpU81UKNaZXxKnmJZwchOffkkyHr/+ytb8ElrPq eO7A== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=cIfU7OqX; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id b20-20020aa79514000000b0066675b3f21esi3891048pfp.36.2023.06.15.20.18.10; Thu, 15 Jun 2023 20:18:22 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=cIfU7OqX; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S241663AbjFPDHt (ORCPT + 99 others); Thu, 15 Jun 2023 23:07:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35478 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S241646AbjFPDHi (ORCPT ); Thu, 15 Jun 2023 23:07:38 -0400 Received: from mga01.intel.com (mga01.intel.com [192.55.52.88]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C9C8F2D40; Thu, 15 Jun 2023 20:07:26 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1686884846; x=1718420846; h=from:to:cc:subject:date:message-id:in-reply-to: references; bh=0qVLWf99lh56KC7DV9RlLyblgh+kyME9c2tLdLIauOU=; b=cIfU7OqXRv4tSSIg6RzmWDwBxNdocnnBfmbPh5goFNgwEiQHQGPMwzJT RPIm+bVBzkkuKwdTrwzKQIobQ0C6JUP0Vil7uM6ml0jAwlQIIbEJcagLE 1thwxWiKnbTln/zFLb00fdZO2VMO3KHv3suOUkAuq09YFGLJ8jp7tpbOo zs49FiaRmpvGaJrUvuyXoR6KM9lEELZLxksLhlk4NUQyaNQ0D3Fo/8SeB QmD6awNgGLROs+bp9hI2sUWc1RI/3UnL4q7tVe4+OwvZYagZn+LHzX0E2 kP2omGVAnI13R9o2pYNdjummuH0nDS11XK28UKW51ts1GICPcTGnEFfXg A==; X-IronPort-AV: E=McAfee;i="6600,9927,10742"; a="387736540" X-IronPort-AV: E=Sophos;i="6.00,246,1681196400"; d="scan'208";a="387736540" Received: from orsmga008.jf.intel.com ([10.7.209.65]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 15 Jun 2023 20:07:26 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6600,9927,10742"; a="742483474" X-IronPort-AV: E=Sophos;i="6.00,246,1681196400"; d="scan'208";a="742483474" Received: from yzhao56-desk.sh.intel.com ([10.239.159.62]) by orsmga008-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 15 Jun 2023 20:07:23 -0700 From: Yan Zhao To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org Cc: pbonzini@redhat.com, seanjc@google.com, chao.gao@intel.com, kai.huang@intel.com, robert.hoo.linux@gmail.com, Yan Zhao Subject: [PATCH v3 11/11] KVM: x86/mmu: split a single gfn zap range when guest MTRRs are honored Date: Fri, 16 Jun 2023 10:42:13 +0800 Message-Id: <20230616024213.7706-1-yan.y.zhao@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20230616023101.7019-1-yan.y.zhao@intel.com> References: <20230616023101.7019-1-yan.y.zhao@intel.com> X-Spam-Status: No, score=-4.4 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_MED, RCVD_IN_MSPIKE_H3,RCVD_IN_MSPIKE_WL,SPF_HELO_NONE,SPF_NONE, T_SCC_BODY_TEXT_LINE,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1768827652641239080?= X-GMAIL-MSGID: =?utf-8?q?1768827652641239080?= Split a single gfn zap range (specifially range [0, ~0UL)) to 
smaller ranges according to the current memslot layout when guest MTRRs are honored. Though vCPUs have been serialized to perform kvm_zap_gfn_range() for MTRR updates and CR0.CD toggles, the rescheduling cost caused by contention is still huge when concurrent page faults take read locks on kvm->mmu_lock. Splitting a single huge zap range according to the actual memslot layout can reduce unnecessary traversal and scheduling cost in the TDP MMU. Also, it increases the chances for larger ranges to find existing ranges to zap in the zap list. Signed-off-by: Yan Zhao --- arch/x86/kvm/mtrr.c | 31 +++++++++++++++++++++++++------ 1 file changed, 25 insertions(+), 6 deletions(-) diff --git a/arch/x86/kvm/mtrr.c b/arch/x86/kvm/mtrr.c index e2a097822a62..b83abd14ccb1 100644 --- a/arch/x86/kvm/mtrr.c +++ b/arch/x86/kvm/mtrr.c @@ -917,21 +917,40 @@ static void kvm_zap_or_wait_mtrr_zap_list(struct kvm *kvm) static void kvm_mtrr_zap_gfn_range(struct kvm_vcpu *vcpu, gfn_t gfn_start, gfn_t gfn_end) { + int idx = srcu_read_lock(&vcpu->kvm->srcu); + const struct kvm_memory_slot *memslot; struct mtrr_zap_range *range; + struct kvm_memslot_iter iter; + struct kvm_memslots *slots; + gfn_t start, end; + int i; + + for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) { + slots = __kvm_memslots(vcpu->kvm, i); + kvm_for_each_memslot_in_gfn_range(&iter, slots, gfn_start, gfn_end) { + memslot = iter.slot; + start = max(gfn_start, memslot->base_gfn); + end = min(gfn_end, memslot->base_gfn + memslot->npages); + if (WARN_ON_ONCE(start >= end)) + continue; - range = kmalloc(sizeof(*range), GFP_KERNEL_ACCOUNT); - if (!range) - goto fail; + range = kmalloc(sizeof(*range), GFP_KERNEL_ACCOUNT); + if (!range) + goto fail; - range->start = gfn_start; - range->end = gfn_end; + range->start = start; + range->end = end; - kvm_add_mtrr_zap_list(vcpu->kvm, range); + kvm_add_mtrr_zap_list(vcpu->kvm, range); + } + } + srcu_read_unlock(&vcpu->kvm->srcu, idx); kvm_zap_or_wait_mtrr_zap_list(vcpu->kvm); return; fail: + srcu_read_unlock(&vcpu->kvm->srcu, idx); kvm_clear_mtrr_zap_list(vcpu->kvm); kvm_zap_gfn_range(vcpu->kvm, gfn_start, gfn_end); }
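For illustration only, a minimal, self-contained user-space sketch (not kernel code) of the per-memslot clamping that kvm_mtrr_zap_gfn_range() performs in the patch above: the requested GFN range is intersected with each memslot so that only backed sub-ranges would be queued for zapping. The struct slot type, the example slot layout, and the split_by_slots() helper are made up for this sketch.

#include <stdint.h>
#include <stdio.h>

typedef uint64_t gfn_t;

struct slot {
	gfn_t base_gfn;
	uint64_t npages;
};

static void split_by_slots(const struct slot *slots, int nr_slots,
			   gfn_t gfn_start, gfn_t gfn_end)
{
	int i;

	for (i = 0; i < nr_slots; i++) {
		/* clamp the request to the slot, like max()/min() in the patch */
		gfn_t slot_end = slots[i].base_gfn + slots[i].npages;
		gfn_t start = gfn_start > slots[i].base_gfn ? gfn_start : slots[i].base_gfn;
		gfn_t end = gfn_end < slot_end ? gfn_end : slot_end;

		if (start >= end)
			continue;	/* no overlap with this slot */

		/* a real implementation would queue [start, end) for zapping */
		printf("zap [%#llx, %#llx)\n",
		       (unsigned long long)start, (unsigned long long)end);
	}
}

int main(void)
{
	/* hypothetical guest layout: RAM below 4G and RAM above 4G */
	const struct slot slots[] = {
		{ .base_gfn = 0x00000,  .npages = 0x80000  },
		{ .base_gfn = 0x100000, .npages = 0x100000 },
	};

	/* a full-range request, comparable to zapping gpa 0..~0 */
	split_by_slots(slots, 2, 0, ~0ULL >> 12);
	return 0;
}

Clamping before queuing keeps each queued range bounded by what is actually mapped, which is what lets the later deduplication in the mtrr_zap_list take effect for concurrent vCPUs.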