From patchwork Fri Dec 1 05:39:01 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Anshuman Khandual
X-Patchwork-Id: 172260
From: Anshuman Khandual
To: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	will@kernel.org, catalin.marinas@arm.com, mark.rutland@arm.com
Cc: Anshuman Khandual, Mark Brown, James Clark, Rob Herring, Marc Zyngier,
	Suzuki Poulose, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	linux-perf-users@vger.kernel.org
Subject: [PATCH V15 3/8] drivers: perf: arm_pmuv3: Enable branch stack sampling framework
Date: Fri, 1 Dec 2023 11:09:01 +0530
Message-Id: <20231201053906.1261704-4-anshuman.khandual@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20231201053906.1261704-1-anshuman.khandual@arm.com>
References: <20231201053906.1261704-1-anshuman.khandual@arm.com>
MIME-Version: 1.0

Branch stack sampling support, i.e. capturing branch records during execution
in core perf, rides along with the normal HW events being scheduled on the
PMU. This prepares the ARMV8 PMU framework for branch stack support on
relevant PMUs with the required HW implementation.

ARMV8 PMU hardware support for branch stack sampling is indicated via a new
feature flag called 'has_branch_stack' that can be ascertained via probing.
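As background, the following user-space sketch (illustrative only, not part
of this patch) shows what such a branch stack sampling event looks like:
core perf's has_branch_stack() simply tests for PERF_SAMPLE_BRANCH_STACK in
the event attributes. The event type, sample period and branch filters below
are arbitrary example values.

#include <linux/perf_event.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Open a hardware event that also requests branch records with each sample */
static int open_branch_sampling_event(void)
{
        struct perf_event_attr attr;

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_HARDWARE;
        attr.config = PERF_COUNT_HW_CPU_CYCLES;
        attr.sample_period = 100000;
        attr.sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_BRANCH_STACK;
        attr.branch_sample_type = PERF_SAMPLE_BRANCH_ANY | PERF_SAMPLE_BRANCH_USER;

        /* Monitor the calling task on any CPU */
        return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
}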
This modifies the current gate in armpmu_event_init() which blocks branch
stack sampling based perf events unconditionally. Instead, it allows such
perf events to be initialized on PMU hardware that supports them. Branch
stack sampling is enabled and disabled along with regular PMU events. This
adds the required function callbacks in armv8pmu_branch_xxx() format to
drive the PMU branch stack hardware when supported. This also adds fallback
stub definitions for these callbacks for PMUs which do not have the required
support.

If a task gets scheduled out, the current branch records get saved in the
task's context data, which can later be used to fill in the records upon an
event overflow. Hence, we enable PERF_ATTACH_TASK_DATA (an
event->attach_state based flag) for branch stack requesting perf events. But
this also requires adding support for the pmu::sched_task() callback to
arm_pmu.

Cc: Catalin Marinas
Cc: Will Deacon
Cc: Mark Rutland
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual
---
Changes in V15:

- Added a comment for armv8pmu_branch_probe() regarding single cpu probe
- Dropped zero_branch_stack based zero branch records mechanism
- Moved armv8_pmu_xxx() stub definitions inside arm_brbe.h for arm32
  (!CONFIG_ARM64_BRBE)
- Included arm_brbe.h header in arm_pmuv3.c
- Dropped CONFIG_PERF_EVENTS wrapping from header entries
- Flush branch records when a cpu bound event follows a task bound event

 drivers/perf/arm_brbe.h        |  50 ++++++++++++
 drivers/perf/arm_pmu.c         |  57 ++++++++++++-
 drivers/perf/arm_pmuv3.c       | 141 ++++++++++++++++++++++++++++++++-
 include/linux/perf/arm_pmu.h   |  29 ++++++-
 include/linux/perf/arm_pmuv3.h |   1 -
 5 files changed, 273 insertions(+), 5 deletions(-)
 create mode 100644 drivers/perf/arm_brbe.h

diff --git a/drivers/perf/arm_brbe.h b/drivers/perf/arm_brbe.h
new file mode 100644
index 000000000000..609e4d4ccac6
--- /dev/null
+++ b/drivers/perf/arm_brbe.h
@@ -0,0 +1,50 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Branch Record Buffer Extension Helpers.
+ *
+ * Copyright (C) 2022-2023 ARM Limited
+ *
+ * Author: Anshuman Khandual
+ */
+#include
+
+static inline void armv8pmu_branch_reset(void)
+{
+}
+
+static inline void armv8pmu_branch_probe(struct arm_pmu *arm_pmu)
+{
+}
+
+static inline bool armv8pmu_branch_attr_valid(struct perf_event *event)
+{
+        WARN_ON_ONCE(!has_branch_stack(event));
+        return false;
+}
+
+static inline void armv8pmu_branch_enable(struct arm_pmu *arm_pmu)
+{
+}
+
+static inline void armv8pmu_branch_disable(void)
+{
+}
+
+static inline void armv8pmu_branch_read(struct pmu_hw_events *cpuc,
+                                        struct perf_event *event)
+{
+        WARN_ON_ONCE(!has_branch_stack(event));
+}
+
+static inline void armv8pmu_branch_save(struct arm_pmu *arm_pmu, void *ctx)
+{
+}
+
+static inline int armv8pmu_task_ctx_cache_alloc(struct arm_pmu *arm_pmu)
+{
+        return 0;
+}
+
+static inline void armv8pmu_task_ctx_cache_free(struct arm_pmu *arm_pmu)
+{
+}
diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index d712a19e47ac..15bb80823ae6 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -317,6 +317,15 @@ armpmu_del(struct perf_event *event, int flags)
 	struct hw_perf_event *hwc = &event->hw;
 	int idx = hwc->idx;
 
+	if (has_branch_stack(event)) {
+		WARN_ON_ONCE(!hw_events->brbe_users);
+		hw_events->brbe_users--;
+		if (!hw_events->brbe_users) {
+			hw_events->brbe_context = NULL;
+			hw_events->brbe_sample_type = 0;
+		}
+	}
+
 	armpmu_stop(event, PERF_EF_UPDATE);
 	hw_events->events[idx] = NULL;
 	armpmu->clear_event_idx(hw_events, event);
@@ -333,6 +342,38 @@ armpmu_add(struct perf_event *event, int flags)
 	struct hw_perf_event *hwc = &event->hw;
 	int idx;
 
+	if (has_branch_stack(event)) {
+		/*
+		 * Reset branch records buffer if a new CPU bound event
+		 * gets scheduled on a PMU. Otherwise existing branch
+		 * records present in the buffer might just leak into
+		 * such events.
+		 *
+		 * Also reset current 'hw_events->brbe_context' because
+		 * any previous task bound event now would have lost an
+		 * opportunity for continuous branch records.
+		 */
+		if (!event->ctx->task) {
+			hw_events->brbe_context = NULL;
+			if (armpmu->branch_reset)
+				armpmu->branch_reset();
+		}
+
+		/*
+		 * Reset branch records buffer if a new task event gets
+		 * scheduled on a PMU which might have existing records.
+		 * Otherwise older branch records present in the buffer
+		 * might leak into the new task event.
+		 */
+		if (event->ctx->task && hw_events->brbe_context != event->ctx) {
+			hw_events->brbe_context = event->ctx;
+			if (armpmu->branch_reset)
+				armpmu->branch_reset();
+		}
+		hw_events->brbe_users++;
+		hw_events->brbe_sample_type = event->attr.branch_sample_type;
+	}
+
 	/* An event following a process won't be stopped earlier */
 	if (!cpumask_test_cpu(smp_processor_id(), &armpmu->supported_cpus))
 		return -ENOENT;
@@ -512,13 +553,24 @@ static int armpmu_event_init(struct perf_event *event)
 	    !cpumask_test_cpu(event->cpu, &armpmu->supported_cpus))
 		return -ENOENT;
 
-	/* does not support taken branch sampling */
-	if (has_branch_stack(event))
+	/*
+	 * Branch stack sampling events are allowed
+	 * only on PMU which has required support.
+	 */
+	if (has_branch_stack(event) && !armpmu->has_branch_stack)
 		return -EOPNOTSUPP;
 
 	return __hw_perf_event_init(event);
 }
 
+static void armpmu_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in)
+{
+	struct arm_pmu *armpmu = to_arm_pmu(pmu_ctx->pmu);
+
+	if (armpmu->sched_task)
+		armpmu->sched_task(pmu_ctx, sched_in);
+}
+
 static void armpmu_enable(struct pmu *pmu)
 {
 	struct arm_pmu *armpmu = to_arm_pmu(pmu);
@@ -865,6 +917,7 @@ struct arm_pmu *armpmu_alloc(void)
 	}
 
 	pmu->pmu = (struct pmu) {
+		.sched_task	= armpmu_sched_task,
 		.pmu_enable	= armpmu_enable,
 		.pmu_disable	= armpmu_disable,
 		.event_init	= armpmu_event_init,
diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
index 6ca7be05229c..832df718101f 100644
--- a/drivers/perf/arm_pmuv3.c
+++ b/drivers/perf/arm_pmuv3.c
@@ -25,6 +25,7 @@
 #include
 #include
 
+#include "arm_brbe.h"
 
 /* ARMv8 Cortex-A53 specific event types. */
 #define ARMV8_A53_PERFCTR_PREF_LINEFILL		0xC2
@@ -751,14 +752,56 @@ static void armv8pmu_start(struct arm_pmu *cpu_pmu)
 	armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
 
 	kvm_vcpu_pmu_resync_el0();
+	if (cpu_pmu->has_branch_stack)
+		armv8pmu_branch_enable(cpu_pmu);
 }
 
 static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
 {
+	if (cpu_pmu->has_branch_stack)
+		armv8pmu_branch_disable();
+
 	/* Disable all counters */
 	armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E);
 }
 
+static void read_branch_records(struct pmu_hw_events *cpuc,
+				struct perf_event *event,
+				struct perf_sample_data *data,
+				bool *branch_captured)
+{
+	/*
+	 * CPU specific branch records buffer must have been allocated already
+	 * for the hardware records to be captured and processed further.
+	 */
+	if (WARN_ON(!cpuc->branches))
+		return;
+
+	/*
+	 * Overflowed event's branch_sample_type does not match the configured
+	 * branch filters in the BRBE HW. So the captured branch records here
+	 * cannot be co-related to the overflowed event. Report to the user as
+	 * if no branch records have been captured, and flush branch records.
+	 * The same scenario is applicable when the current task context does
+	 * not match with overflown event.
+	 */
+	if ((cpuc->brbe_sample_type != event->attr.branch_sample_type) ||
+	    (event->ctx->task && cpuc->brbe_context != event->ctx))
+		return;
+
+	/*
+	 * Read the branch records from the hardware once after the PMU IRQ
+	 * has been triggered but subsequently same records can be used for
+	 * other events that might have been overflowed simultaneously thus
+	 * saving much CPU cycles.
+	 */
+	if (!*branch_captured) {
+		armv8pmu_branch_read(cpuc, event);
+		*branch_captured = true;
+	}
+	perf_sample_save_brstack(data, event, &cpuc->branches->branch_stack);
+}
+
 static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
 {
 	u32 pmovsr;
@@ -766,6 +809,7 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
 	struct pmu_hw_events *cpuc = this_cpu_ptr(cpu_pmu->hw_events);
 	struct pt_regs *regs;
 	int idx;
+	bool branch_captured = false;
 
 	/*
 	 * Get and reset the IRQ flags
@@ -809,6 +853,13 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
 		if (!armpmu_event_set_period(event))
 			continue;
 
+		/*
+		 * PMU IRQ should remain asserted until all branch records
+		 * are captured and processed into struct perf_sample_data.
+		 */
+		if (has_branch_stack(event) && cpu_pmu->has_branch_stack)
+			read_branch_records(cpuc, event, &data, &branch_captured);
+
 		/*
 		 * Perf event overflow will queue the processing of the event as
 		 * an irq_work which will be taken care of in the handling of
@@ -818,6 +869,8 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
 		cpu_pmu->disable(event);
 	}
 	armv8pmu_start(cpu_pmu);
+	if (cpu_pmu->has_branch_stack)
+		armv8pmu_branch_reset();
 
 	return IRQ_HANDLED;
 }
@@ -907,6 +960,24 @@ static int armv8pmu_user_event_idx(struct perf_event *event)
 	return event->hw.idx;
 }
 
+static void armv8pmu_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in)
+{
+	struct arm_pmu *armpmu = to_arm_pmu(pmu_ctx->pmu);
+	void *task_ctx = pmu_ctx->task_ctx_data;
+
+	if (armpmu->has_branch_stack) {
+		/* Save branch records in task_ctx on sched out */
+		if (task_ctx && !sched_in) {
+			armv8pmu_branch_save(armpmu, task_ctx);
+			return;
+		}
+
+		/* Reset branch records on sched in */
+		if (sched_in)
+			armv8pmu_branch_reset();
+	}
+}
+
 /*
  * Add an event filter to a given event.
  */
@@ -977,6 +1048,9 @@ static void armv8pmu_reset(void *info)
 		pmcr |= ARMV8_PMU_PMCR_LP;
 
 	armv8pmu_pmcr_write(pmcr);
+
+	if (cpu_pmu->has_branch_stack)
+		armv8pmu_branch_reset();
 }
 
 static int __armv8_pmuv3_map_event_id(struct arm_pmu *armpmu,
@@ -1014,6 +1088,20 @@ static int __armv8_pmuv3_map_event(struct perf_event *event,
 
 	hw_event_id = __armv8_pmuv3_map_event_id(armpmu, event);
 
+	if (has_branch_stack(event)) {
+		if (!armv8pmu_branch_attr_valid(event))
+			return -EOPNOTSUPP;
+
+		/*
+		 * If a task gets scheduled out, the current branch records
+		 * get saved in the task's context data, which can be later
+		 * used to fill in the records upon an event overflow. Let's
+		 * enable PERF_ATTACH_TASK_DATA in 'event->attach_state' for
+		 * all branch stack sampling perf events.
+		 */
+		event->attach_state |= PERF_ATTACH_TASK_DATA;
+	}
+
 	/*
 	 * CHAIN events only work when paired with an adjacent counter, and it
 	 * never makes sense for a user to open one in isolation, as they'll be
@@ -1130,6 +1218,41 @@ static void __armv8pmu_probe_pmu(void *info)
 		cpu_pmu->reg_pmmir = read_pmmir();
 	else
 		cpu_pmu->reg_pmmir = 0;
+
+	/*
+	 * BRBE is being probed on a single cpu for a
+	 * given PMU. The remaining cpus, are assumed
+	 * to have the exact same BRBE implementation.
+	 */
+	armv8pmu_branch_probe(cpu_pmu);
+}
+
+static int branch_records_alloc(struct arm_pmu *armpmu)
+{
+	struct branch_records __percpu *records;
+	int cpu;
+
+	records = alloc_percpu_gfp(struct branch_records, GFP_KERNEL);
+	if (!records)
+		return -ENOMEM;
+
+	/*
+	 * percpu memory allocated for 'records' gets completely consumed
+	 * here, and never required to be freed up later. So permanently
+	 * losing access to this anchor i.e 'records' is acceptable.
+	 *
+	 * Otherwise this allocation handle would have to be saved up for
+	 * free_percpu() release later if required.
+	 */
+	for_each_possible_cpu(cpu) {
+		struct pmu_hw_events *events_cpu;
+		struct branch_records *records_cpu;
+
+		events_cpu = per_cpu_ptr(armpmu->hw_events, cpu);
+		records_cpu = per_cpu_ptr(records, cpu);
+		events_cpu->branches = records_cpu;
+	}
+	return 0;
 }
 
 static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
@@ -1146,7 +1269,21 @@ static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
 	if (ret)
 		return ret;
 
-	return probe.present ? 0 : -ENODEV;
+	if (!probe.present)
+		return -ENODEV;
+
+	if (cpu_pmu->has_branch_stack) {
+		ret = armv8pmu_task_ctx_cache_alloc(cpu_pmu);
+		if (ret)
+			return ret;
+
+		ret = branch_records_alloc(cpu_pmu);
+		if (ret) {
+			armv8pmu_task_ctx_cache_free(cpu_pmu);
+			return ret;
+		}
+	}
+	return 0;
 }
 
 static void armv8pmu_disable_user_access_ipi(void *unused)
@@ -1205,6 +1342,8 @@ static int armv8_pmu_init(struct arm_pmu *cpu_pmu, char *name,
 	cpu_pmu->set_event_filter	= armv8pmu_set_event_filter;
 
 	cpu_pmu->pmu.event_idx		= armv8pmu_user_event_idx;
+	cpu_pmu->sched_task		= armv8pmu_sched_task;
+	cpu_pmu->branch_reset		= armv8pmu_branch_reset;
 
 	cpu_pmu->name			= name;
 	cpu_pmu->map_event		= map_event;
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index 143fbc10ecfe..a489fdf163b4 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -46,6 +46,18 @@ static_assert((PERF_EVENT_FLAG_ARCH & ARMPMU_EVT_63BIT) == ARMPMU_EVT_63BIT);
 	},								\
 }
 
+/*
+ * Maximum branch record entries which could be processed
+ * for core perf branch stack sampling support, regardless
+ * of the hardware support available on a given ARM PMU.
+ */
+#define MAX_BRANCH_RECORDS		64
+
+struct branch_records {
+	struct perf_branch_stack	branch_stack;
+	struct perf_branch_entry	branch_entries[MAX_BRANCH_RECORDS];
+};
+
 /* The events for a given PMU register set. */
 struct pmu_hw_events {
 	/*
@@ -72,6 +84,17 @@ struct pmu_hw_events {
 	struct arm_pmu		*percpu_pmu;
 
 	int irq;
+
+	struct branch_records	*branches;
+
+	/* Active context for task events */
+	void			*brbe_context;
+
+	/* Active events requesting branch records */
+	unsigned int		brbe_users;
+
+	/* Active branch sample type filters */
+	unsigned long		brbe_sample_type;
 };
 
 enum armpmu_attr_groups {
@@ -102,8 +125,12 @@ struct arm_pmu {
 	void		(*stop)(struct arm_pmu *);
 	void		(*reset)(void *);
 	int		(*map_event)(struct perf_event *event);
+	void		(*sched_task)(struct perf_event_pmu_context *pmu_ctx, bool sched_in);
+	void		(*branch_reset)(void);
 	int		num_events;
-	bool		secure_access; /* 32-bit ARM only */
+	unsigned int	secure_access	: 1, /* 32-bit ARM only */
+			has_branch_stack: 1, /* 64-bit ARM only */
+			reserved	: 30;
 #define ARMV8_PMUV3_MAX_COMMON_EVENTS		0x40
 	DECLARE_BITMAP(pmceid_bitmap, ARMV8_PMUV3_MAX_COMMON_EVENTS);
 #define ARMV8_PMUV3_EXT_COMMON_EVENT_BASE	0x4000
diff --git a/include/linux/perf/arm_pmuv3.h b/include/linux/perf/arm_pmuv3.h
index 9c226adf938a..4abbb3932f54 100644
--- a/include/linux/perf/arm_pmuv3.h
+++ b/include/linux/perf/arm_pmuv3.h
@@ -302,5 +302,4 @@
 		default: WARN(1, "Invalid PMEV* index\n");	\
 	}								\
 } while (0)
-
 #endif
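To illustrate how a hardware backend is expected to plug into this framework,
below is a rough sketch (illustrative only, not the BRBE driver added later in
this series) of an armv8pmu_branch_read() style callback that fills the
per-cpu 'branch_records' buffer which read_branch_records() then hands to
perf_sample_save_brstack(). The helper brbe_hw_read_record() is a made-up
placeholder for the actual hardware access.

#include <linux/perf_event.h>
#include <linux/perf/arm_pmu.h>

/*
 * Placeholder for real hardware access: a genuine driver would read the
 * branch record registers here. Returning false means "no more records".
 */
static bool brbe_hw_read_record(int idx, u64 *from, u64 *to)
{
        return false;
}

/* Fill cpuc->branches from hardware, newest record first, and set nr */
static void example_branch_read(struct pmu_hw_events *cpuc,
                                struct perf_event *event)
{
        struct branch_records *records = cpuc->branches;
        int idx, count = 0;

        for (idx = 0; idx < MAX_BRANCH_RECORDS; idx++) {
                struct perf_branch_entry *entry = &records->branch_entries[count];
                u64 from, to;

                /* Stop at the first invalid/empty hardware record */
                if (!brbe_hw_read_record(idx, &from, &to))
                        break;

                /* Clear mispred/predicted/cycles etc before filling */
                perf_clear_branch_entry_bitfields(entry);
                entry->from = from;
                entry->to = to;
                count++;
        }
        records->branch_stack.nr = count;
}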