From patchwork Tue Jul 11 08:24:50 2023
X-Patchwork-Submitter: Anshuman Khandual
X-Patchwork-Id: 118316
From: Anshuman Khandual
To: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 will@kernel.org, catalin.marinas@arm.com, mark.rutland@arm.com
Cc: Anshuman Khandual, Mark Brown, James Clark, Rob Herring, Marc Zyngier,
 Suzuki Poulose, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
 linux-perf-users@vger.kernel.org
Subject: [PATCH V13 - RESEND 05/10] arm64/perf: Add branch stack support in ARMV8 PMU
Date: Tue, 11 Jul 2023 13:54:50 +0530
Message-Id: <20230711082455.215983-6-anshuman.khandual@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230711082455.215983-1-anshuman.khandual@arm.com>
References: <20230711082455.215983-1-anshuman.khandual@arm.com>

This enables support for branch stack sampling events in the ARMV8 PMU by
checking has_branch_stack() on the event inside the 'struct arm_pmu'
callbacks, although the branch stack helpers armv8pmu_branch_XXXXX() are
just dummy functions for now. While here, this also defines arm_pmu's
sched_task() callback as armv8pmu_sched_task(), which resets the branch
record buffer on a sched_in.
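For context, has_branch_stack() is true only for events opened with
PERF_SAMPLE_BRANCH_STACK set in their sample_type. A minimal user space
sketch of an event that would exercise the armv8pmu_branch_XXXXX() paths
below (illustrative only, not part of this patch; the function name is
made up and error handling is omitted):

	#include <linux/perf_event.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	/*
	 * Illustrative sketch only. An event opened with
	 * PERF_SAMPLE_BRANCH_STACK in its sample_type makes
	 * has_branch_stack() true in the kernel, which is what gates
	 * the armv8pmu_branch_XXXXX() helpers added by this patch.
	 */
	static int open_branch_sampling_event(void)
	{
		struct perf_event_attr attr = {
			.type		= PERF_TYPE_HARDWARE,
			.config		= PERF_COUNT_HW_CPU_CYCLES,
			.size		= sizeof(struct perf_event_attr),
			.sample_period	= 100000,
			.sample_type	= PERF_SAMPLE_IP | PERF_SAMPLE_BRANCH_STACK,
			.branch_sample_type = PERF_SAMPLE_BRANCH_ANY |
					      PERF_SAMPLE_BRANCH_USER,
		};

		/* Measure the calling thread, on any CPU. */
		return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
	}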
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Mark Rutland
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Tested-by: James Clark
Signed-off-by: Anshuman Khandual
---
 arch/arm/include/asm/arm_pmuv3.h    |  9 +++
 arch/arm64/include/asm/perf_event.h | 31 +++++++++++
 drivers/perf/arm_pmuv3.c            | 86 +++++++++++++++++++++--------
 3 files changed, 102 insertions(+), 24 deletions(-)

diff --git a/arch/arm/include/asm/arm_pmuv3.h b/arch/arm/include/asm/arm_pmuv3.h
index f3cd04ff022d..d7c438939a6f 100644
--- a/arch/arm/include/asm/arm_pmuv3.h
+++ b/arch/arm/include/asm/arm_pmuv3.h
@@ -249,4 +249,13 @@ static inline bool is_pmuv3p5(int pmuver)
 	return pmuver >= ARMV8_PMU_DFR_VER_V3P5;
 }
 
+/* BRBE stubs */
+#ifdef CONFIG_PERF_EVENTS
+static inline void armv8pmu_branch_read(struct pmu_hw_events *cpuc, struct perf_event *event) { }
+static inline bool armv8pmu_branch_attr_valid(struct perf_event *event) { return false; }
+static inline void armv8pmu_branch_enable(struct perf_event *event) { }
+static inline void armv8pmu_branch_disable(struct perf_event *event) { }
+static inline void armv8pmu_branch_probe(struct arm_pmu *arm_pmu) { }
+static inline void armv8pmu_branch_reset(void) { }
+#endif
 #endif
diff --git a/arch/arm64/include/asm/perf_event.h b/arch/arm64/include/asm/perf_event.h
index eb7071c9eb34..ebc392ba3559 100644
--- a/arch/arm64/include/asm/perf_event.h
+++ b/arch/arm64/include/asm/perf_event.h
@@ -24,4 +24,35 @@ extern unsigned long perf_misc_flags(struct pt_regs *regs);
 	(regs)->pstate = PSR_MODE_EL1h;	\
 }
 
+struct pmu_hw_events;
+struct arm_pmu;
+struct perf_event;
+
+#ifdef CONFIG_PERF_EVENTS
+static inline bool has_branch_stack(struct perf_event *event);
+
+static inline void armv8pmu_branch_read(struct pmu_hw_events *cpuc, struct perf_event *event)
+{
+	WARN_ON_ONCE(!has_branch_stack(event));
+}
+
+static inline bool armv8pmu_branch_attr_valid(struct perf_event *event)
+{
+	WARN_ON_ONCE(!has_branch_stack(event));
+	return false;
+}
+
+static inline void armv8pmu_branch_enable(struct perf_event *event)
+{
+	WARN_ON_ONCE(!has_branch_stack(event));
+}
+
+static inline void armv8pmu_branch_disable(struct perf_event *event)
+{
+	WARN_ON_ONCE(!has_branch_stack(event));
+}
+
+static inline void armv8pmu_branch_probe(struct arm_pmu *arm_pmu) { }
+static inline void armv8pmu_branch_reset(void) { }
+#endif
 #endif
diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
index 08b3a1bf0ef6..10bbe02f4079 100644
--- a/drivers/perf/arm_pmuv3.c
+++ b/drivers/perf/arm_pmuv3.c
@@ -721,38 +721,21 @@ static void armv8pmu_enable_event(struct perf_event *event)
 	 * Enable counter and interrupt, and set the counter to count
 	 * the event that we're interested in.
 	 */
-
-	/*
-	 * Disable counter
-	 */
 	armv8pmu_disable_event_counter(event);
-
-	/*
-	 * Set event.
-	 */
 	armv8pmu_write_event_type(event);
-
-	/*
-	 * Enable interrupt for this counter
-	 */
 	armv8pmu_enable_event_irq(event);
-
-	/*
-	 * Enable counter
-	 */
 	armv8pmu_enable_event_counter(event);
+
+	if (has_branch_stack(event))
+		armv8pmu_branch_enable(event);
 }
 
 static void armv8pmu_disable_event(struct perf_event *event)
 {
-	/*
-	 * Disable counter
-	 */
-	armv8pmu_disable_event_counter(event);
+	if (has_branch_stack(event))
+		armv8pmu_branch_disable(event);
 
-	/*
-	 * Disable interrupt for this counter
-	 */
+	armv8pmu_disable_event_counter(event);
 	armv8pmu_disable_event_irq(event);
 }
 
@@ -830,6 +813,11 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
 		if (!armpmu_event_set_period(event))
 			continue;
 
+		if (has_branch_stack(event) && !WARN_ON(!cpuc->branches)) {
+			armv8pmu_branch_read(cpuc, event);
+			perf_sample_save_brstack(&data, event, &cpuc->branches->branch_stack);
+		}
+
 		/*
 		 * Perf event overflow will queue the processing of the event as
 		 * an irq_work which will be taken care of in the handling of
@@ -928,6 +916,14 @@ static int armv8pmu_user_event_idx(struct perf_event *event)
 	return event->hw.idx;
 }
 
+static void armv8pmu_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in)
+{
+	struct arm_pmu *armpmu = to_arm_pmu(pmu_ctx->pmu);
+
+	if (sched_in && armpmu->has_branch_stack)
+		armv8pmu_branch_reset();
+}
+
 /*
  * Add an event filter to a given event.
  */
@@ -998,6 +994,9 @@ static void armv8pmu_reset(void *info)
 		pmcr |= ARMV8_PMU_PMCR_LP;
 
 	armv8pmu_pmcr_write(pmcr);
+
+	if (cpu_pmu->has_branch_stack)
+		armv8pmu_branch_reset();
 }
 
 static int __armv8_pmuv3_map_event_id(struct arm_pmu *armpmu,
@@ -1035,6 +1034,9 @@ static int __armv8_pmuv3_map_event(struct perf_event *event,
 
 	hw_event_id = __armv8_pmuv3_map_event_id(armpmu, event);
 
+	if (has_branch_stack(event) && !armv8pmu_branch_attr_valid(event))
+		return -EOPNOTSUPP;
+
 	/*
 	 * CHAIN events only work when paired with an adjacent counter, and it
 	 * never makes sense for a user to open one in isolation, as they'll be
@@ -1151,6 +1153,33 @@ static void __armv8pmu_probe_pmu(void *info)
 		cpu_pmu->reg_pmmir = read_pmmir();
 	else
 		cpu_pmu->reg_pmmir = 0;
+	armv8pmu_branch_probe(cpu_pmu);
+}
+
+static int branch_records_alloc(struct arm_pmu *armpmu)
+{
+	struct branch_records __percpu *records;
+	int cpu;
+
+	records = alloc_percpu_gfp(struct branch_records, GFP_KERNEL);
+	if (!records)
+		return -ENOMEM;
+
+	/*
+	 * FIXME: Memory allocated via records gets completely
+	 * consumed here, never required to be freed up later. Hence
+	 * losing access to on stack 'records' is acceptable.
+	 * Otherwise this alloc handle has to be saved some where.
+	 */
+	for_each_possible_cpu(cpu) {
+		struct pmu_hw_events *events_cpu;
+		struct branch_records *records_cpu;
+
+		events_cpu = per_cpu_ptr(armpmu->hw_events, cpu);
+		records_cpu = per_cpu_ptr(records, cpu);
+		events_cpu->branches = records_cpu;
+	}
+	return 0;
 }
 
 static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
@@ -1167,7 +1196,15 @@ static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
 	if (ret)
 		return ret;
 
-	return probe.present ? 0 : -ENODEV;
+	if (!probe.present)
+		return -ENODEV;
+
+	if (cpu_pmu->has_branch_stack) {
+		ret = branch_records_alloc(cpu_pmu);
+		if (ret)
+			return ret;
+	}
+	return 0;
 }
 
 static void armv8pmu_disable_user_access_ipi(void *unused)
@@ -1230,6 +1267,7 @@ static int armv8_pmu_init(struct arm_pmu *cpu_pmu, char *name,
 	cpu_pmu->set_event_filter = armv8pmu_set_event_filter;
 
 	cpu_pmu->pmu.event_idx = armv8pmu_user_event_idx;
+	cpu_pmu->sched_task = armv8pmu_sched_task;
 
 	cpu_pmu->name = name;
 	cpu_pmu->map_event = map_event;
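
Once a real BRBE backend fills in these stub helpers, branch stacks would
be consumed from user space through perf's existing branch sampling
options, for example (illustrative usage, assuming the hardware and driver
support is actually present, with ./workload standing in for the profiled
program):

	perf record -j any,u -- ./workload
	perf report --branch-stack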