From patchwork Thu Feb 15 15:29:14 2024
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 201591
From: Paul Durrant
To: Paolo Bonzini, Jonathan Corbet, Christian Borntraeger, Janosch Frank,
	Claudio Imbrenda, David Hildenbrand, Heiko Carstens, Vasily Gorbik,
	Alexander Gordeev, Sven Schnelle, Sean Christopherson,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	x86@kernel.org, "H. Peter Anvin", David Woodhouse, Paul Durrant,
	Shuah Khan, kvm@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-s390@vger.kernel.org,
	linux-kselftest@vger.kernel.org
Subject: [PATCH v13 19/21] KVM: pfncache: check the need for invalidation under read lock first
Date: Thu, 15 Feb 2024 15:29:14 +0000
Message-Id: <20240215152916.1158-20-paul@xen.org>
In-Reply-To: <20240215152916.1158-1-paul@xen.org>
References: <20240215152916.1158-1-paul@xen.org>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0

From: Paul Durrant

When processing mmu_notifier invalidations for gpc caches, pre-check for
overlap with the invalidation event while holding gpc->lock for read, and
only take gpc->lock for write if the cache needs to be invalidated.  Doing
a pre-check without taking gpc->lock for write avoids unnecessarily
contending the lock for unrelated invalidations, which is very beneficial
for caches that are heavily used (but rarely subjected to mmu_notifier
invalidations).

Signed-off-by: Paul Durrant
Reviewed-by: David Woodhouse
---
Cc: Sean Christopherson
Cc: Paolo Bonzini
Cc: David Woodhouse

v13:
 - Use Sean's preferred wording for the commit comment.

v10:
 - New in this version.
---
 virt/kvm/pfncache.c | 22 +++++++++++++++++++---
 1 file changed, 19 insertions(+), 3 deletions(-)

diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index 4e64d349b2f7..a60d8f906896 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -29,14 +29,30 @@ void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm, unsigned long start,
 
 	spin_lock(&kvm->gpc_lock);
 	list_for_each_entry(gpc, &kvm->gpc_list, list) {
-		write_lock_irq(&gpc->lock);
+		read_lock_irq(&gpc->lock);
 
 		/* Only a single page so no need to care about length */
 		if (gpc->valid && !is_error_noslot_pfn(gpc->pfn) &&
 		    gpc->uhva >= start && gpc->uhva < end) {
-			gpc->valid = false;
+			read_unlock_irq(&gpc->lock);
+
+			/*
+			 * There is a small window here where the cache could
+			 * be modified, and invalidation would no longer be
+			 * necessary. Hence check again whether invalidation
+			 * is still necessary once the write lock has been
+			 * acquired.
+			 */
+
+			write_lock_irq(&gpc->lock);
+			if (gpc->valid && !is_error_noslot_pfn(gpc->pfn) &&
+			    gpc->uhva >= start && gpc->uhva < end)
+				gpc->valid = false;
+			write_unlock_irq(&gpc->lock);
+			continue;
 		}
-		write_unlock_irq(&gpc->lock);
+
+		read_unlock_irq(&gpc->lock);
 	}
 	spin_unlock(&kvm->gpc_lock);
 }
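
The hunk above is a double-checked locking pattern over a rwlock: a cheap
pre-check is done under the read lock, and the write lock is taken only when
the invalidation range actually overlaps the cache, with the condition
re-tested because the cache may change in the window between dropping the
read lock and acquiring the write lock.  As a rough stand-alone sketch of
that pattern (not part of this patch; struct my_cache and maybe_invalidate()
are made-up names, and only the generic kernel rwlock API is assumed):

/* Stand-alone sketch only; my_cache and maybe_invalidate() are hypothetical. */
#include <linux/spinlock.h>	/* rwlock_t, read/write_lock_irq() */
#include <linux/types.h>	/* bool */

struct my_cache {
	rwlock_t lock;
	bool valid;
	unsigned long uhva;	/* userspace HVA covered by the cache */
};

static void maybe_invalidate(struct my_cache *c, unsigned long start,
			     unsigned long end)
{
	/* Pre-check under the read lock: unrelated ranges never contend. */
	read_lock_irq(&c->lock);
	if (!c->valid || c->uhva < start || c->uhva >= end) {
		read_unlock_irq(&c->lock);
		return;
	}
	read_unlock_irq(&c->lock);

	/*
	 * The cache may have changed in the window between dropping the
	 * read lock and taking the write lock, so check again before
	 * invalidating.
	 */
	write_lock_irq(&c->lock);
	if (c->valid && c->uhva >= start && c->uhva < end)
		c->valid = false;
	write_unlock_irq(&c->lock);
}

The point is the same as in the patch: the common case (no overlap) only ever
touches the read side of the lock, so heavily used caches are not serialised
against unrelated mmu_notifier events; the write lock is taken, and the
condition re-checked, only when an invalidation genuinely hits the cache.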