From patchwork Fri Nov 4 06:21:04 2022
X-Patchwork-Submitter: Usama Arif <usama.arif@bytedance.com>
X-Patchwork-Id: 15378
From: Usama Arif <usama.arif@bytedance.com>
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org,
	linux-doc@vger.kernel.org, virtualization@lists.linux-foundation.org,
	linux@armlinux.org.uk, yezengruan@huawei.com, catalin.marinas@arm.com,
	will@kernel.org,
	maz@kernel.org, steven.price@arm.com, mark.rutland@arm.com,
	bagasdotme@gmail.com
Cc: fam.zheng@bytedance.com, liangma@liangbit.com,
	punit.agrawal@bytedance.com, Usama Arif <usama.arif@bytedance.com>
Subject: [v2 5/6] KVM: arm64: Support the VCPU preemption check
Date: Fri, 4 Nov 2022 06:21:04 +0000
Message-Id: <20221104062105.4119003-6-usama.arif@bytedance.com>
In-Reply-To: <20221104062105.4119003-1-usama.arif@bytedance.com>
References: <20221104062105.4119003-1-usama.arif@bytedance.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Support the vcpu_is_preempted() functionality under KVM/arm64. This
enhances lock performance on overcommitted hosts (where there are more
runnable vCPUs than physical CPUs in the system), since busy-waiting on
preempted vCPUs hurts system performance far more than yielding early.

Signed-off-by: Zengruan Ye <yezengruan@huawei.com>
Signed-off-by: Usama Arif <usama.arif@bytedance.com>
---
 arch/arm64/include/asm/paravirt.h |   2 +
 arch/arm64/include/asm/spinlock.h |  16 ++++-
 arch/arm64/kernel/paravirt.c      | 112 ++++++++++++++++++++++++++++++
 arch/arm64/kernel/setup.c         |   3 +
 include/linux/cpuhotplug.h        |   1 +
 5 files changed, 133 insertions(+), 1 deletion(-)
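Not part of the change itself, but for context on why this matters: the
generic locking code polls vcpu_is_preempted() while optimistically
spinning on a lock owner (for example osq_lock() and the mutex/rwsem
spin-wait paths) and gives up the spin as soon as the owner's vCPU is
reported as preempted. The stand-alone sketch below models that consumer
pattern; every identifier in it (fake_vcpu_preempted, spin_for_lock, ...)
is invented for illustration and is not taken from the kernel sources.

/*
 * Illustrative only: a stand-alone model of the spin-then-yield pattern
 * that vcpu_is_preempted() enables for lock waiters.
 */
#include <stdbool.h>
#include <stdio.h>

/* Stand-in for the per-vCPU state a hypervisor would expose. */
static bool fake_vcpu_preempted[4] = { false, true, false, false };

/* Stand-in for vcpu_is_preempted(): always false on bare metal. */
static bool vcpu_is_preempted(int cpu)
{
	return fake_vcpu_preempted[cpu];
}

/*
 * Spin waiting for a lock owner running on @owner_cpu. Returns false if
 * we gave up early because the owner's vCPU is not actually running --
 * continuing to busy-wait on it would only burn host CPU time.
 */
static bool spin_for_lock(int owner_cpu, int max_spins)
{
	for (int i = 0; i < max_spins; i++) {
		if (vcpu_is_preempted(owner_cpu))
			return false;	/* yield instead of busy-waiting */
		/* the kernel would cpu_relax() and re-check the lock here */
	}
	return true;
}

int main(void)
{
	printf("owner on cpu 0: %s\n",
	       spin_for_lock(0, 1000) ? "kept spinning" : "yielded early");
	printf("owner on cpu 1: %s\n",
	       spin_for_lock(1, 1000) ? "kept spinning" : "yielded early");
	return 0;
}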
diff --git a/arch/arm64/include/asm/paravirt.h b/arch/arm64/include/asm/paravirt.h
index 9aa193e0e8f2..4ccb4356c56b 100644
--- a/arch/arm64/include/asm/paravirt.h
+++ b/arch/arm64/include/asm/paravirt.h
@@ -19,10 +19,12 @@ static inline u64 paravirt_steal_clock(int cpu)
 }
 
 int __init pv_time_init(void);
+int __init pv_lock_init(void);
 
 #else
 
 #define pv_time_init() do {} while (0)
+#define pv_lock_init() do {} while (0)
 
 #endif // CONFIG_PARAVIRT
 
diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
index 0525c0b089ed..7023efa4de96 100644
--- a/arch/arm64/include/asm/spinlock.h
+++ b/arch/arm64/include/asm/spinlock.h
@@ -10,7 +10,20 @@
 
 /* See include/linux/spinlock.h */
 #define smp_mb__after_spinlock()	smp_mb()
 
+#define vcpu_is_preempted vcpu_is_preempted
+
+#ifdef CONFIG_PARAVIRT
+#include <linux/static_call.h>
+
+bool dummy_vcpu_is_preempted(int cpu);
+DECLARE_STATIC_CALL(pv_vcpu_is_preempted, dummy_vcpu_is_preempted);
+static inline bool vcpu_is_preempted(int cpu)
+{
+	return static_call(pv_vcpu_is_preempted)(cpu);
+}
+
+#else
 /*
  * Changing this will break osq_lock() thanks to the call inside
  * smp_cond_load_relaxed().
@@ -18,10 +31,11 @@
  * See:
  * https://lore.kernel.org/lkml/20200110100612.GC2827@hirez.programming.kicks-ass.net
  */
-#define vcpu_is_preempted vcpu_is_preempted
 static inline bool vcpu_is_preempted(int cpu)
 {
 	return false;
 }
 
+#endif /* CONFIG_PARAVIRT */
+
 #endif /* __ASM_SPINLOCK_H */
diff --git a/arch/arm64/kernel/paravirt.c b/arch/arm64/kernel/paravirt.c
index 57c7c211f8c7..a340f5f4327f 100644
--- a/arch/arm64/kernel/paravirt.c
+++ b/arch/arm64/kernel/paravirt.c
@@ -20,8 +20,10 @@
 #include <linux/static_call.h>
 #include <linux/types.h>
 
+#include <asm/hypervisor.h>
 #include <asm/paravirt.h>
 #include <asm/pvclock-abi.h>
+#include <asm/pvlock-abi.h>
 #include <asm/smp_plat.h>
 
 struct static_key paravirt_steal_enabled;
@@ -38,7 +40,12 @@ struct pv_time_stolen_time_region {
 	struct pvclock_vcpu_stolen_time __rcu *kaddr;
 };
 
+struct pv_lock_state_region {
+	struct pvlock_vcpu_state __rcu *kaddr;
+};
+
 static DEFINE_PER_CPU(struct pv_time_stolen_time_region, stolen_time_region);
+static DEFINE_PER_CPU(struct pv_lock_state_region, lock_state_region);
 
 static bool steal_acc = true;
 static int __init parse_no_stealacc(char *arg)
@@ -178,3 +185,108 @@ int __init pv_time_init(void)
 
 	return 0;
 }
+
+static bool native_vcpu_is_preempted(int cpu)
+{
+	return false;
+}
+
+DEFINE_STATIC_CALL(pv_vcpu_is_preempted, native_vcpu_is_preempted);
+
+static bool para_vcpu_is_preempted(int cpu)
+{
+	struct pv_lock_state_region *reg;
+	__le64 preempted_le;
+
+	reg = per_cpu_ptr(&lock_state_region, cpu);
+	if (!reg->kaddr) {
+		pr_warn_once("PV lock enabled but not configured for cpu %d\n",
+			     cpu);
+		return false;
+	}
+
+	preempted_le = le64_to_cpu(READ_ONCE(reg->kaddr->preempted));
+
+	return !!(preempted_le);
+}
+
+static int pvlock_vcpu_state_dying_cpu(unsigned int cpu)
+{
+	struct pv_lock_state_region *reg;
+
+	reg = this_cpu_ptr(&lock_state_region);
+	if (!reg->kaddr)
+		return 0;
+
+	memunmap(reg->kaddr);
+	memset(reg, 0, sizeof(*reg));
+
+	return 0;
+}
+
+static int init_pvlock_vcpu_state(unsigned int cpu)
+{
+	struct pv_lock_state_region *reg;
+	struct arm_smccc_res res;
+
+	reg = this_cpu_ptr(&lock_state_region);
+
+	arm_smccc_1_1_invoke(ARM_SMCCC_VENDOR_HYP_KVM_PV_LOCK_FUNC_ID, &res);
+
+	if (res.a0 == SMCCC_RET_NOT_SUPPORTED) {
+		pr_warn("Failed to init PV lock data structure\n");
+		return -EINVAL;
+	}
+
+	reg->kaddr = memremap(res.a0,
+			      sizeof(struct pvlock_vcpu_state),
+			      MEMREMAP_WB);
+
+	if (!reg->kaddr) {
+		pr_warn("Failed to map PV lock data structure\n");
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static int kvm_arm_init_pvlock(void)
+{
+	int ret;
+
+	ret = cpuhp_setup_state(CPUHP_AP_ARM_KVM_PVLOCK_STARTING,
+				"hypervisor/arm/pvlock:starting",
+				init_pvlock_vcpu_state,
+				pvlock_vcpu_state_dying_cpu);
+	if (ret < 0) {
+		pr_warn("PV-lock init failed\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+static bool has_kvm_pvlock(void)
+{
+	return kvm_arm_hyp_service_available(ARM_SMCCC_KVM_FUNC_PV_LOCK);
+}
+
+int __init pv_lock_init(void)
+{
+	int ret;
+
+	if (is_hyp_mode_available())
+		return 0;
+
+	if (!has_kvm_pvlock())
+		return 0;
+
+	ret = kvm_arm_init_pvlock();
+	if (ret)
+		return ret;
+
+	static_call_update(pv_vcpu_is_preempted, para_vcpu_is_preempted);
+	pr_info("using PV-lock preempted\n");
+
+	return 0;
+}
\ No newline at end of file
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index fea3223704b6..05ca07ac5800 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -42,6 +42,7 @@
 #include <asm/cpu_ops.h>
 #include <asm/kasan.h>
 #include <asm/numa.h>
+#include <asm/paravirt.h>
 #include <asm/scs.h>
 #include <asm/sections.h>
 #include <asm/setup.h>
@@ -360,6 +361,8 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p)
 	smp_init_cpus();
 	smp_build_mpidr_hash();
 
+	pv_lock_init();
+
 	/* Init percpu seeds for random tags after cpus are set up. */
 	kasan_init_sw_tags();
diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
index f61447913db9..c0ee11855c73 100644
--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -192,6 +192,7 @@ enum cpuhp_state {
 	/* Must be the last timer callback */
 	CPUHP_AP_DUMMY_TIMER_STARTING,
 	CPUHP_AP_ARM_XEN_STARTING,
+	CPUHP_AP_ARM_KVM_PVLOCK_STARTING,
 	CPUHP_AP_ARM_CORESIGHT_STARTING,
 	CPUHP_AP_ARM_CORESIGHT_CTI_STARTING,
 	CPUHP_AP_ARM64_ISNDEP_STARTING,
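A note on the mechanism, outside the patch itself: pv_lock_init() makes
the native-vs-paravirt decision exactly once at boot and updates the
pv_vcpu_is_preempted static call accordingly, so a bare-metal kernel
always keeps the trivial "return false" behaviour. The stand-alone
sketch below models the same decide-once-then-override idea with a
plain function pointer so it builds as ordinary C; the kernel uses the
static_call() machinery instead, and every identifier here
(have_pv_lock_service, pv_lock_init_sketch, ...) is hypothetical.

#include <stdbool.h>
#include <stdio.h>

/* Bare-metal default: a physical CPU is never "preempted". */
static bool native_preempted(int cpu)
{
	(void)cpu;
	return false;
}

/* PV variant: a real guest would read shared per-vCPU state here. */
static bool pv_preempted(int cpu)
{
	return cpu == 1;	/* pretend vCPU 1 is currently preempted */
}

/* Default implementation; may be overridden once during early init. */
static bool (*vcpu_preempted_fn)(int cpu) = native_preempted;

/* Stand-in for has_kvm_pvlock()/SMCCC discovery of the PV lock service. */
static bool have_pv_lock_service(void)
{
	return true;
}

static void pv_lock_init_sketch(void)
{
	if (have_pv_lock_service())
		vcpu_preempted_fn = pv_preempted;
}

int main(void)
{
	pv_lock_init_sketch();
	for (int cpu = 0; cpu < 3; cpu++)
		printf("vCPU %d preempted: %d\n", cpu, vcpu_preempted_fn(cpu));
	return 0;
}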