Message ID | 20231204144334.910-19-paul@xen.org |
---|---|
State | New |
Headers |
Series |
KVM: xen: update shared_info and vcpu_info handling
|
|
Commit Message
Paul Durrant
Dec. 4, 2023, 2:43 p.m. UTC
From: Paul Durrant <pdurrant@amazon.com>

Taking a write lock on a pfncache will be disruptive if the cache is heavily used (which only requires a read lock). Hence, in the MMU notifier callback, take read locks on caches to check for a match; only take a write lock to actually perform an invalidation (after another check).

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Sean Christopherson <seanjc@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: David Woodhouse <dwmw2@infradead.org>

v10:
 - New in this version.
---
 virt/kvm/pfncache.c | 22 +++++++++++++++++++---
 1 file changed, 19 insertions(+), 3 deletions(-)
Comments
On Mon, 2023-12-04 at 14:43 +0000, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
>
> Taking a write lock on a pfncache will be disruptive if the cache is
> heavily used (which only requires a read lock). Hence, in the MMU notifier
> callback, take read locks on caches to check for a match; only taking a
> write lock to actually perform an invalidation (after another check).
>
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>

In particular, the previous 'don't block on pfncache locks in kvm_xen_set_evtchn_fast()' patch in this series is easy to justify on the basis that it only falls back to the slow path if it can't take a read lock immediately. And surely it should *always* be able to take a read lock immediately unless there's an actual *writer* — which should be a rare event, and means the cache was probably going to be invalidated anyway. But then we realised the MMU notifier was going to disrupt that.

> ---
> Cc: Sean Christopherson <seanjc@google.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: David Woodhouse <dwmw2@infradead.org>
>
> v10:
> - New in this version.
> ---
>  virt/kvm/pfncache.c | 22 +++++++++++++++++++---
>  1 file changed, 19 insertions(+), 3 deletions(-)
>
> diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
> index c2a2d1e145b6..4da16d494f4b 100644
> --- a/virt/kvm/pfncache.c
> +++ b/virt/kvm/pfncache.c
> @@ -29,14 +29,30 @@ void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm, unsigned long start,
>
>  	spin_lock(&kvm->gpc_lock);
>  	list_for_each_entry(gpc, &kvm->gpc_list, list) {
> -		write_lock_irq(&gpc->lock);
> +		read_lock_irq(&gpc->lock);
>
>  		/* Only a single page so no need to care about length */
>  		if (gpc->valid && !is_error_noslot_pfn(gpc->pfn) &&
>  		    gpc->uhva >= start && gpc->uhva < end) {
> -			gpc->valid = false;
> +			read_unlock_irq(&gpc->lock);
> +
> +			/*
> +			 * There is a small window here where the cache could
> +			 * be modified, and invalidation would no longer be
> +			 * necessary. Hence check again whether invalidation
> +			 * is still necessary once the write lock has been
> +			 * acquired.
> +			 */
> +
> +			write_lock_irq(&gpc->lock);
> +			if (gpc->valid && !is_error_noslot_pfn(gpc->pfn) &&
> +			    gpc->uhva >= start && gpc->uhva < end)
> +				gpc->valid = false;
> +			write_unlock_irq(&gpc->lock);
> +			continue;
>  		}
> -		write_unlock_irq(&gpc->lock);
> +
> +		read_unlock_irq(&gpc->lock);
>  	}
>  	spin_unlock(&kvm->gpc_lock);
>  }
diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index c2a2d1e145b6..4da16d494f4b 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -29,14 +29,30 @@ void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm, unsigned long start,
 
 	spin_lock(&kvm->gpc_lock);
 	list_for_each_entry(gpc, &kvm->gpc_list, list) {
-		write_lock_irq(&gpc->lock);
+		read_lock_irq(&gpc->lock);
 
 		/* Only a single page so no need to care about length */
 		if (gpc->valid && !is_error_noslot_pfn(gpc->pfn) &&
 		    gpc->uhva >= start && gpc->uhva < end) {
-			gpc->valid = false;
+			read_unlock_irq(&gpc->lock);
+
+			/*
+			 * There is a small window here where the cache could
+			 * be modified, and invalidation would no longer be
+			 * necessary. Hence check again whether invalidation
+			 * is still necessary once the write lock has been
+			 * acquired.
+			 */
+
+			write_lock_irq(&gpc->lock);
+			if (gpc->valid && !is_error_noslot_pfn(gpc->pfn) &&
+			    gpc->uhva >= start && gpc->uhva < end)
+				gpc->valid = false;
+			write_unlock_irq(&gpc->lock);
+			continue;
 		}
-		write_unlock_irq(&gpc->lock);
+
+		read_unlock_irq(&gpc->lock);
 	}
 	spin_unlock(&kvm->gpc_lock);
 }