From patchwork Tue Nov 1 14:53:45 2022
X-Patchwork-Submitter: Vitaly Kuznetsov
X-Patchwork-Id: 13697
From: Vitaly Kuznetsov
To: kvm@vger.kernel.org, Paolo Bonzini, Sean Christopherson
Cc: Wanpeng Li, Jim Mattson, Michael Kelley, Siddharth Chandrasekaran,
    Yuan Yao, Maxim Levitsky, linux-hyperv@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v13 07/48] KVM: x86: Move clearing of TLB_FLUSH_CURRENT to
 kvm_vcpu_flush_tlb_all()
Date: Tue, 1 Nov 2022 15:53:45 +0100
Message-Id: <20221101145426.251680-8-vkuznets@redhat.com>
In-Reply-To: <20221101145426.251680-1-vkuznets@redhat.com>
References: <20221101145426.251680-1-vkuznets@redhat.com>
From: Sean Christopherson

Clear KVM_REQ_TLB_FLUSH_CURRENT in kvm_vcpu_flush_tlb_all() instead of in
its sole caller that processes KVM_REQ_TLB_FLUSH.  Regardless of why/when
kvm_vcpu_flush_tlb_all() is called, flushing "all" TLB entries also
flushes "current" TLB entries.

Ideally, there will never be another caller of kvm_vcpu_flush_tlb_all(),
and moving the handling "requires" extra work to document the ordering
requirement, but future Hyper-V paravirt TLB flushing support will add
similar logic for the flush "guest" path (Hyper-V can flush a subset of
"guest" entries).  And in the Hyper-V case, KVM needs to do more than just
clear the request: the queue of GPAs to flush also needs to be purged, and
doing all of that only in the request path is undesirable as
kvm_vcpu_flush_tlb_guest() does have multiple callers (though it's
unlikely KVM's paravirt TLB flush will coincide with Hyper-V's paravirt
TLB flush).

Move the logic even though it adds extra "work" so that KVM will be
consistent with how flush requests are processed when the Hyper-V support
lands.

No functional change intended.

Signed-off-by: Sean Christopherson
Signed-off-by: Vitaly Kuznetsov
---
 arch/x86/kvm/x86.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index f618187585b1..fdda5f447f87 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3396,6 +3396,9 @@ static void kvm_vcpu_flush_tlb_all(struct kvm_vcpu *vcpu)
 {
 	++vcpu->stat.tlb_flush;
 	static_call(kvm_x86_flush_tlb_all)(vcpu);
+
+	/* Flushing all ASIDs flushes the current ASID... */
+	kvm_clear_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu);
 }
 
 static void kvm_vcpu_flush_tlb_guest(struct kvm_vcpu *vcpu)
@@ -10477,12 +10480,15 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 			kvm_mmu_sync_roots(vcpu);
 		if (kvm_check_request(KVM_REQ_LOAD_MMU_PGD, vcpu))
 			kvm_mmu_load_pgd(vcpu);
-		if (kvm_check_request(KVM_REQ_TLB_FLUSH, vcpu)) {
+
+		/*
+		 * Note, the order matters here, as flushing "all" TLB entries
+		 * also flushes the "current" TLB entries, i.e. servicing the
+		 * flush "all" will clear any request to flush "current".
+		 */
+		if (kvm_check_request(KVM_REQ_TLB_FLUSH, vcpu))
 			kvm_vcpu_flush_tlb_all(vcpu);
 
-			/* Flushing all ASIDs flushes the current ASID... */
-			kvm_clear_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu);
-		}
 		kvm_service_local_tlb_flush_requests(vcpu);
 
 		if (kvm_check_request(KVM_REQ_REPORT_TPR_ACCESS, vcpu)) {
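
For readers outside the KVM tree, the following is a small self-contained
userspace sketch of the test-and-clear request pattern the patch relies on.
It is not KVM code: the request constants, struct fields, and helper names
are illustrative stand-ins that only mirror the semantics of
kvm_check_request()/kvm_clear_request(), and it shows why clearing the
"current" request inside the "all" flush makes the later "current" check a
no-op regardless of where the caller sits.

/*
 * Standalone sketch (not KVM code) of order-dependent request servicing:
 * a pending "flush current" request can be dropped once "flush all" has
 * been serviced, because flushing all TLB entries subsumes the current
 * ones.  All names below are hypothetical stand-ins.
 */
#include <stdbool.h>
#include <stdio.h>

#define REQ_TLB_FLUSH_ALL      (1u << 0)  /* stand-in for KVM_REQ_TLB_FLUSH */
#define REQ_TLB_FLUSH_CURRENT  (1u << 1)  /* stand-in for KVM_REQ_TLB_FLUSH_CURRENT */

struct vcpu {
	unsigned int requests;
	unsigned int full_flushes;
	unsigned int current_flushes;
};

/* Mirrors kvm_check_request(): test and clear a pending request. */
static bool check_request(struct vcpu *v, unsigned int req)
{
	bool pending = v->requests & req;

	v->requests &= ~req;
	return pending;
}

/* Mirrors kvm_clear_request(): drop a request without acting on it. */
static void clear_request(struct vcpu *v, unsigned int req)
{
	v->requests &= ~req;
}

/* Counterpart of kvm_vcpu_flush_tlb_all() after this patch. */
static void flush_tlb_all(struct vcpu *v)
{
	v->full_flushes++;
	/* Flushing all entries flushes the current ones too. */
	clear_request(v, REQ_TLB_FLUSH_CURRENT);
}

static void flush_tlb_current(struct vcpu *v)
{
	v->current_flushes++;
}

/* Counterpart of the request-processing sequence in vcpu_enter_guest(). */
static void service_requests(struct vcpu *v)
{
	/* Order matters: servicing "all" first makes "current" a no-op. */
	if (check_request(v, REQ_TLB_FLUSH_ALL))
		flush_tlb_all(v);
	if (check_request(v, REQ_TLB_FLUSH_CURRENT))
		flush_tlb_current(v);
}

int main(void)
{
	struct vcpu v = { .requests = REQ_TLB_FLUSH_ALL | REQ_TLB_FLUSH_CURRENT };

	service_requests(&v);
	/* Expect one full flush and no redundant "current" flush. */
	printf("full=%u current=%u\n", v.full_flushes, v.current_flushes);
	return 0;
}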