From patchwork Wed Jan 3 03:14:05 2024
X-Patchwork-Submitter: "Mi, Dapeng"
X-Patchwork-Id: 184592
From: Dapeng Mi
To: Sean Christopherson, Paolo Bonzini, Jim Mattson
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Zhenyu Wang, Zhang Xiong, Mingwei Zhang, Like Xu, Jinrong Liang, Dapeng Mi
Subject: [kvm-unit-tests Patch v3 07/11] x86: pmu: Enable and disable PMCs in loop() asm blob
Date: Wed, 3 Jan 2024 11:14:05 +0800
Message-Id: <20240103031409.2504051-8-dapeng1.mi@linux.intel.com>
In-Reply-To: <20240103031409.2504051-1-dapeng1.mi@linux.intel.com>
References: <20240103031409.2504051-1-dapeng1.mi@linux.intel.com>
Currently, enabling the PMCs, executing loop() and disabling the PMCs are split across three separate functions, so other instructions can be executed between enabling the PMCs and running loop(), or between running loop() and disabling the PMCs. For example, when multiple counters are enabled in measure_many(), the instructions that enable the 2nd and subsequent counters are counted by the 1st counter. Consequently, the current implementation can only verify the count against a rough range rather than a precise value, even for the instructions and branches events. Strictly speaking, such verification is meaningless: the test can still pass even when the KVM vPMU is broken and reports an incorrect instructions or branches count, as long as the bogus count falls inside the rough range.

Thus, move the PMC enabling and disabling into the loop() asm blob and ensure that only the loop asm instructions are counted. The instructions and branches events can then be verified against a precise count instead of a rough range.
Signed-off-by: Dapeng Mi
---
 x86/pmu.c | 83 +++++++++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 69 insertions(+), 14 deletions(-)

diff --git a/x86/pmu.c b/x86/pmu.c
index 46bed66c5c9f..88b89ad889b9 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -18,6 +18,20 @@
 #define EXPECTED_INSTR 17
 #define EXPECTED_BRNCH 5
 
+// Instruction number of the LOOP_ASM code
+#define LOOP_INSTRNS 10
+#define LOOP_ASM						\
+	"1: mov (%1), %2; add $64, %1;\n\t"			\
+	"nop; nop; nop; nop; nop; nop; nop;\n\t"		\
+	"loop 1b;\n\t"
+
+#define PRECISE_LOOP_ASM					\
+	"wrmsr;\n\t"						\
+	"mov %%ecx, %%edi; mov %%ebx, %%ecx;\n\t"		\
+	LOOP_ASM						\
+	"mov %%edi, %%ecx; xor %%eax, %%eax; xor %%edx, %%edx;\n\t" \
+	"wrmsr;\n\t"
+
 typedef struct {
 	uint32_t ctr;
 	uint64_t config;
@@ -54,13 +68,43 @@ char *buf;
 static struct pmu_event *gp_events;
 static unsigned int gp_events_size;
 
-static inline void loop(void)
+
+static inline void __loop(void)
+{
+	unsigned long tmp, tmp2, tmp3;
+
+	asm volatile(LOOP_ASM
+		     : "=c"(tmp), "=r"(tmp2), "=r"(tmp3)
+		     : "0"(N), "1"(buf));
+}
+
+/*
+ * Enable and disable counters in a whole asm blob to ensure
+ * no other instructions are counted in the time slot between
+ * enabling the counters and executing the real LOOP_ASM code.
+ * Thus the instructions and branches events can be verified
+ * against precise counts instead of a rough valid count range.
+ */
+static inline void __precise_count_loop(u64 cntrs)
 {
 	unsigned long tmp, tmp2, tmp3;
+	unsigned int global_ctl = pmu.msr_global_ctl;
+	u32 eax = cntrs & (BIT_ULL(32) - 1);
+	u32 edx = cntrs >> 32;
 
-	asm volatile("1: mov (%1), %2; add $64, %1; nop; nop; nop; nop; nop; nop; nop; loop 1b"
-		: "=c"(tmp), "=r"(tmp2), "=r"(tmp3): "0"(N), "1"(buf));
+	asm volatile(PRECISE_LOOP_ASM
+		     : "=b"(tmp), "=r"(tmp2), "=r"(tmp3)
+		     : "a"(eax), "d"(edx), "c"(global_ctl),
+		       "0"(N), "1"(buf)
+		     : "edi");
+}
+
+static inline void loop(u64 cntrs)
+{
+	if (!this_cpu_has_perf_global_ctrl())
+		__loop();
+	else
+		__precise_count_loop(cntrs);
 }
 
 volatile uint64_t irq_received;
@@ -159,18 +203,17 @@ static void __start_event(pmu_counter_t *evt, uint64_t count)
 		ctrl = (ctrl & ~(0xf << shift)) | (usrospmi << shift);
 		wrmsr(MSR_CORE_PERF_FIXED_CTR_CTRL, ctrl);
 	}
-	global_enable(evt);
 	apic_write(APIC_LVTPC, PMI_VECTOR);
 }
 
 static void start_event(pmu_counter_t *evt)
 {
 	__start_event(evt, 0);
+	global_enable(evt);
 }
 
-static void stop_event(pmu_counter_t *evt)
+static void __stop_event(pmu_counter_t *evt)
 {
-	global_disable(evt);
 	if (is_gp(evt)) {
 		wrmsr(MSR_GP_EVENT_SELECTx(event_to_global_idx(evt)),
 		      evt->config & ~EVNTSEL_EN);
@@ -182,14 +225,24 @@ static void stop_event(pmu_counter_t *evt)
 	evt->count = rdmsr(evt->ctr);
 }
 
+static void stop_event(pmu_counter_t *evt)
+{
+	global_disable(evt);
+	__stop_event(evt);
+}
+
 static noinline void measure_many(pmu_counter_t *evt, int count)
 {
 	int i;
+	u64 cntrs = 0;
+
+	for (i = 0; i < count; i++) {
+		__start_event(&evt[i], 0);
+		cntrs |= BIT_ULL(event_to_global_idx(&evt[i]));
+	}
+	loop(cntrs);
 	for (i = 0; i < count; i++)
-		start_event(&evt[i]);
-	loop();
-	for (i = 0; i < count; i++)
-		stop_event(&evt[i]);
+		__stop_event(&evt[i]);
 }
 
 static void measure_one(pmu_counter_t *evt)
@@ -199,9 +252,11 @@ static void measure_one(pmu_counter_t *evt)
 
 static noinline void __measure(pmu_counter_t *evt, uint64_t count)
 {
+	u64 cntrs = BIT_ULL(event_to_global_idx(evt));
+
 	__start_event(evt, count);
-	loop();
-	stop_event(evt);
+	loop(cntrs);
+	__stop_event(evt);
 }
 
 static bool verify_event(uint64_t count, struct pmu_event *e)
@@ -451,7 +506,7 @@ static void check_running_counter_wrmsr(void)
 	report_prefix_push("running counter wrmsr");
 
 	start_event(&evt);
-	loop();
+	__loop();
 	wrmsr(MSR_GP_COUNTERx(0), 0);
 	stop_event(&evt);
 	report(evt.count < gp_events[0].min, "cntr");
@@ -468,7 +523,7 @@ static void check_running_counter_wrmsr(void)
 
 	wrmsr(MSR_GP_COUNTERx(0), count);
 
-	loop();
+	__loop();
 	stop_event(&evt);
 	if (this_cpu_has_perf_global_status()) {