Message ID: 20230602010550.785722-1-seanjc@google.com
State: New
Headers:
  From: Sean Christopherson <seanjc@google.com>
  To: Sean Christopherson <seanjc@google.com>, Paolo Bonzini <pbonzini@redhat.com>
  Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Jon Kohler <jon@nutanix.com>
  Date: Thu, 1 Jun 2023 18:05:50 -0700
  Message-ID: <20230602010550.785722-1-seanjc@google.com>
  Subject: [PATCH] KVM: x86: Use cpu_feature_enabled() for PKU instead of #ifdef
Series: KVM: x86: Use cpu_feature_enabled() for PKU instead of #ifdef
Commit Message
Sean Christopherson
June 2, 2023, 1:05 a.m. UTC
Replace an #ifdef on CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS with a
cpu_feature_enabled() check on X86_FEATURE_PKU. The macro magic of
DISABLED_MASK_BIT_SET() means that cpu_feature_enabled() provides the
same end result (no code generated) when PKU is disabled by Kconfig.
No functional change intended.
Cc: Jon Kohler <jon@nutanix.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/x86.c | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)
base-commit: a053a0e4a9f8c52f3acf8a9d2520c4bf39077a7e
Comments
> On Jun 1, 2023, at 9:05 PM, Sean Christopherson <seanjc@google.com> wrote:
>
> Replace an #ifdef on CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS with a
> cpu_feature_enabled() check on X86_FEATURE_PKU. The macro magic of
> DISABLED_MASK_BIT_SET() means that cpu_feature_enabled() provides the
> same end result (no code generated) when PKU is disabled by Kconfig.
>
> No functional change intended.
>
> [...]

Thanks for the cleanup!
Reviewed-by: Jon Kohler <jon@nutanix.com>
On Fri, Jun 2, 2023 at 8:51 AM Jon Kohler <jon@nutanix.com> wrote:
>
> > On Jun 1, 2023, at 9:05 PM, Sean Christopherson <seanjc@google.com> wrote:
> >
> > Replace an #ifdef on CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS with a
> > cpu_feature_enabled() check on X86_FEATURE_PKU. The macro magic of
> > DISABLED_MASK_BIT_SET() means that cpu_feature_enabled() provides the
> > same end result (no code generated) when PKU is disabled by Kconfig.
> >
> > [...]
>
> Thanks for the cleanup!
>
> Reviewed-by: Jon Kohler <jon@nutanix.com>

+Mingwei Zhang

As we move towards enabling PKRU on the host, due to some customer
requests, I have to wonder if PKRU-disabled is the norm.

In other words, is this a likely() or unlikely() optimization?
> As we move towards enabling PKRU on the host, due to some customer
> requests, I have to wonder if PKRU-disabled is the norm.
>
> In other words, is this a likely() or unlikely() optimization?

I think it should be likely(), as PKU was introduced very early, in the
Skylake-SP server cores many years ago. Today I think all recent client
CPUs should have PKU on by default, if I am not mistaken. So yeah, adding
a likely() probably should help prevent the compiler from evicting this
code chunk to the end of the function.

Thanks.
-Mingwei
On Fri, Jun 02, 2023, Jim Mattson wrote:
> On Fri, Jun 2, 2023 at 8:51 AM Jon Kohler <jon@nutanix.com> wrote:
> > [...]
> >
> > Thanks for the cleanup!
> >
> > Reviewed-by: Jon Kohler <jon@nutanix.com>
>
> +Mingwei Zhang
>
> As we move towards enabling PKRU on the host, due to some customer
> requests, I have to wonder if PKRU-disabled is the norm.
>
> In other words, is this a likely() or unlikely() optimization?

Neither? I don't see any reason to speculate on guest state. I'll bet
dollars to donuts that adding (un)likely() is negligible in terms of
performance.
On Thu, 01 Jun 2023 18:05:50 -0700, Sean Christopherson wrote:
> Replace an #ifdef on CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS with a
> cpu_feature_enabled() check on X86_FEATURE_PKU. The macro magic of
> DISABLED_MASK_BIT_SET() means that cpu_feature_enabled() provides the
> same end result (no code generated) when PKU is disabled by Kconfig.
>
> No functional change intended.
>
> [...]

Applied to kvm-x86 misc, thanks!

[1/1] KVM: x86: Use cpu_feature_enabled() for PKU instead of #ifdef
      https://github.com/kvm-x86/linux/commit/056b9919a16a

--
https://github.com/kvm-x86/linux/tree/next
https://github.com/kvm-x86/linux/tree/fixes
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index ceb7c5e9cf9e..eed1f0629023 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1017,13 +1017,11 @@ void kvm_load_guest_xsave_state(struct kvm_vcpu *vcpu)
 			wrmsrl(MSR_IA32_XSS, vcpu->arch.ia32_xss);
 	}
 
-#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
-	if (static_cpu_has(X86_FEATURE_PKU) &&
+	if (cpu_feature_enabled(X86_FEATURE_PKU) &&
 	    vcpu->arch.pkru != vcpu->arch.host_pkru &&
 	    ((vcpu->arch.xcr0 & XFEATURE_MASK_PKRU) ||
 	     kvm_is_cr4_bit_set(vcpu, X86_CR4_PKE)))
 		write_pkru(vcpu->arch.pkru);
-#endif /* CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS */
 }
 EXPORT_SYMBOL_GPL(kvm_load_guest_xsave_state);
 
@@ -1032,15 +1030,13 @@ void kvm_load_host_xsave_state(struct kvm_vcpu *vcpu)
 	if (vcpu->arch.guest_state_protected)
 		return;
 
-#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
-	if (static_cpu_has(X86_FEATURE_PKU) &&
+	if (cpu_feature_enabled(X86_FEATURE_PKU) &&
 	    ((vcpu->arch.xcr0 & XFEATURE_MASK_PKRU) ||
 	     kvm_is_cr4_bit_set(vcpu, X86_CR4_PKE))) {
 		vcpu->arch.pkru = rdpkru();
 		if (vcpu->arch.pkru != vcpu->arch.host_pkru)
 			write_pkru(vcpu->arch.host_pkru);
 	}
-#endif /* CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS */
 
 	if (kvm_is_cr4_bit_set(vcpu, X86_CR4_OSXSAVE)) {