From patchwork Tue Jul 11 08:24:46 2023
X-Patchwork-Submitter: Anshuman Khandual
X-Patchwork-Id: 118330
From: Anshuman Khandual
To: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, will@kernel.org, catalin.marinas@arm.com, mark.rutland@arm.com
Cc: Anshuman Khandual, Mark Brown, James Clark, Rob Herring, Marc Zyngier, Suzuki Poulose, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, linux-perf-users@vger.kernel.org
Subject: [PATCH V13 - RESEND 01/10] drivers: perf: arm_pmu: Add new sched_task() callback
Date: Tue, 11 Jul 2023 13:54:46 +0530
Message-Id: <20230711082455.215983-2-anshuman.khandual@arm.com>
In-Reply-To: <20230711082455.215983-1-anshuman.khandual@arm.com>
References: <20230711082455.215983-1-anshuman.khandual@arm.com>

This adds armpmu_sched_task() as the generic pmu's sched_task() override, which in turn invokes a new arm_pmu.sched_task() callback when the arm_pmu instance provides one. This new callback will be used while enabling BRBE in the ARMV8 PMU.
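As an illustration of how the new hook is meant to be consumed (a sketch only, not part of this patch; the back-end names are hypothetical), a PMU driver carrying per-CPU branch-record state would populate the callback roughly like this:

	/* Hypothetical back-end: discard stale branch records on sched in */
	static void example_pmu_sched_task(struct perf_event_pmu_context *pmu_ctx,
					   bool sched_in)
	{
		if (sched_in)
			example_pmu_branch_reset();	/* hypothetical helper */
	}

	static int example_pmu_probe(struct arm_pmu *cpu_pmu)
	{
		/* armpmu_sched_task() forwards to this callback once it is set */
		cpu_pmu->sched_task = example_pmu_sched_task;
		return 0;
	}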
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Mark Rutland
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Tested-by: James Clark
Acked-by: Mark Rutland
Signed-off-by: Anshuman Khandual
---
 drivers/perf/arm_pmu.c       | 9 +++++++++
 include/linux/perf/arm_pmu.h | 1 +
 2 files changed, 10 insertions(+)

diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index f6ccb2cd4dfc..c0475a96c2a0 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -519,6 +519,14 @@ static int armpmu_event_init(struct perf_event *event)
 	return __hw_perf_event_init(event);
 }
 
+static void armpmu_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in)
+{
+	struct arm_pmu *armpmu = to_arm_pmu(pmu_ctx->pmu);
+
+	if (armpmu->sched_task)
+		armpmu->sched_task(pmu_ctx, sched_in);
+}
+
 static void armpmu_enable(struct pmu *pmu)
 {
 	struct arm_pmu *armpmu = to_arm_pmu(pmu);
@@ -865,6 +873,7 @@ struct arm_pmu *armpmu_alloc(void)
 	}
 
 	pmu->pmu = (struct pmu) {
+		.sched_task	= armpmu_sched_task,
 		.pmu_enable	= armpmu_enable,
 		.pmu_disable	= armpmu_disable,
 		.event_init	= armpmu_event_init,
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index a0801f68762b..3d3bd944d630 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -102,6 +102,7 @@ struct arm_pmu {
 	void		(*stop)(struct arm_pmu *);
 	void		(*reset)(void *);
 	int		(*map_event)(struct perf_event *event);
+	void		(*sched_task)(struct perf_event_pmu_context *pmu_ctx, bool sched_in);
 	int		num_events;
 	bool		secure_access; /* 32-bit ARM only */
 #define ARMV8_PMUV3_MAX_COMMON_EVENTS	0x40

From patchwork Tue Jul 11 08:24:47 2023
X-Patchwork-Submitter: Anshuman Khandual
X-Patchwork-Id: 118317
From: Anshuman Khandual
To: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, will@kernel.org, catalin.marinas@arm.com, mark.rutland@arm.com
Cc: Anshuman Khandual, Mark Brown, James Clark, Rob Herring, Marc Zyngier, Suzuki Poulose, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, linux-perf-users@vger.kernel.org
Subject: [PATCH V13 - RESEND 02/10] arm64/perf: Add BRBE registers and fields
Date: Tue, 11 Jul 2023 13:54:47 +0530
Message-Id: <20230711082455.215983-3-anshuman.khandual@arm.com>
In-Reply-To: <20230711082455.215983-1-anshuman.khandual@arm.com>
References: <20230711082455.215983-1-anshuman.khandual@arm.com>

This adds the BRBE register definitions and the various field macros therein. These will be used subsequently by the BRBE driver, which is added later in the series.
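For illustration only (not part of the patch), a consumer of these definitions could read the three registers that make up one branch record, here record 0, with the standard sysreg accessors:

	/* Sketch: fetch source, target and info for branch record 0 */
	static void example_read_brbe_record0(u64 *src, u64 *tgt, u64 *info)
	{
		*src  = read_sysreg_s(SYS_BRBSRC0_EL1);
		*tgt  = read_sysreg_s(SYS_BRBTGT0_EL1);
		*info = read_sysreg_s(SYS_BRBINF0_EL1);
	}

The __SYS_BRBINFO()/__SYS_BRBSRC()/__SYS_BRBTGT() encodings split the record index across CRm and op2, which is why definitions are provided for indexes 0 through 31; records beyond that window are reached by switching banks via BRBFCR_EL1.BANK, as the driver added later in the series does in select_brbe_bank().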
Cc: Catalin Marinas Cc: Will Deacon Cc: Marc Zyngier Cc: Mark Rutland Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Tested-by: James Clark Reviewed-by: Mark Brown Signed-off-by: Anshuman Khandual --- arch/arm64/include/asm/sysreg.h | 103 +++++++++++++++++++++ arch/arm64/tools/sysreg | 158 ++++++++++++++++++++++++++++++++ 2 files changed, 261 insertions(+) diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h index b481935e9314..f95e30c13c8b 100644 --- a/arch/arm64/include/asm/sysreg.h +++ b/arch/arm64/include/asm/sysreg.h @@ -163,6 +163,109 @@ #define SYS_DBGDTRTX_EL0 sys_reg(2, 3, 0, 5, 0) #define SYS_DBGVCR32_EL2 sys_reg(2, 4, 0, 7, 0) +#define __SYS_BRBINFO(n) sys_reg(2, 1, 8, ((n) & 0xf), ((((n) & 0x10)) >> 2 + 0)) +#define __SYS_BRBSRC(n) sys_reg(2, 1, 8, ((n) & 0xf), ((((n) & 0x10)) >> 2 + 1)) +#define __SYS_BRBTGT(n) sys_reg(2, 1, 8, ((n) & 0xf), ((((n) & 0x10)) >> 2 + 2)) + +#define SYS_BRBINF0_EL1 __SYS_BRBINFO(0) +#define SYS_BRBINF1_EL1 __SYS_BRBINFO(1) +#define SYS_BRBINF2_EL1 __SYS_BRBINFO(2) +#define SYS_BRBINF3_EL1 __SYS_BRBINFO(3) +#define SYS_BRBINF4_EL1 __SYS_BRBINFO(4) +#define SYS_BRBINF5_EL1 __SYS_BRBINFO(5) +#define SYS_BRBINF6_EL1 __SYS_BRBINFO(6) +#define SYS_BRBINF7_EL1 __SYS_BRBINFO(7) +#define SYS_BRBINF8_EL1 __SYS_BRBINFO(8) +#define SYS_BRBINF9_EL1 __SYS_BRBINFO(9) +#define SYS_BRBINF10_EL1 __SYS_BRBINFO(10) +#define SYS_BRBINF11_EL1 __SYS_BRBINFO(11) +#define SYS_BRBINF12_EL1 __SYS_BRBINFO(12) +#define SYS_BRBINF13_EL1 __SYS_BRBINFO(13) +#define SYS_BRBINF14_EL1 __SYS_BRBINFO(14) +#define SYS_BRBINF15_EL1 __SYS_BRBINFO(15) +#define SYS_BRBINF16_EL1 __SYS_BRBINFO(16) +#define SYS_BRBINF17_EL1 __SYS_BRBINFO(17) +#define SYS_BRBINF18_EL1 __SYS_BRBINFO(18) +#define SYS_BRBINF19_EL1 __SYS_BRBINFO(19) +#define SYS_BRBINF20_EL1 __SYS_BRBINFO(20) +#define SYS_BRBINF21_EL1 __SYS_BRBINFO(21) +#define SYS_BRBINF22_EL1 __SYS_BRBINFO(22) +#define SYS_BRBINF23_EL1 __SYS_BRBINFO(23) +#define SYS_BRBINF24_EL1 __SYS_BRBINFO(24) +#define SYS_BRBINF25_EL1 __SYS_BRBINFO(25) +#define SYS_BRBINF26_EL1 __SYS_BRBINFO(26) +#define SYS_BRBINF27_EL1 __SYS_BRBINFO(27) +#define SYS_BRBINF28_EL1 __SYS_BRBINFO(28) +#define SYS_BRBINF29_EL1 __SYS_BRBINFO(29) +#define SYS_BRBINF30_EL1 __SYS_BRBINFO(30) +#define SYS_BRBINF31_EL1 __SYS_BRBINFO(31) + +#define SYS_BRBSRC0_EL1 __SYS_BRBSRC(0) +#define SYS_BRBSRC1_EL1 __SYS_BRBSRC(1) +#define SYS_BRBSRC2_EL1 __SYS_BRBSRC(2) +#define SYS_BRBSRC3_EL1 __SYS_BRBSRC(3) +#define SYS_BRBSRC4_EL1 __SYS_BRBSRC(4) +#define SYS_BRBSRC5_EL1 __SYS_BRBSRC(5) +#define SYS_BRBSRC6_EL1 __SYS_BRBSRC(6) +#define SYS_BRBSRC7_EL1 __SYS_BRBSRC(7) +#define SYS_BRBSRC8_EL1 __SYS_BRBSRC(8) +#define SYS_BRBSRC9_EL1 __SYS_BRBSRC(9) +#define SYS_BRBSRC10_EL1 __SYS_BRBSRC(10) +#define SYS_BRBSRC11_EL1 __SYS_BRBSRC(11) +#define SYS_BRBSRC12_EL1 __SYS_BRBSRC(12) +#define SYS_BRBSRC13_EL1 __SYS_BRBSRC(13) +#define SYS_BRBSRC14_EL1 __SYS_BRBSRC(14) +#define SYS_BRBSRC15_EL1 __SYS_BRBSRC(15) +#define SYS_BRBSRC16_EL1 __SYS_BRBSRC(16) +#define SYS_BRBSRC17_EL1 __SYS_BRBSRC(17) +#define SYS_BRBSRC18_EL1 __SYS_BRBSRC(18) +#define SYS_BRBSRC19_EL1 __SYS_BRBSRC(19) +#define SYS_BRBSRC20_EL1 __SYS_BRBSRC(20) +#define SYS_BRBSRC21_EL1 __SYS_BRBSRC(21) +#define SYS_BRBSRC22_EL1 __SYS_BRBSRC(22) +#define SYS_BRBSRC23_EL1 __SYS_BRBSRC(23) +#define SYS_BRBSRC24_EL1 __SYS_BRBSRC(24) +#define SYS_BRBSRC25_EL1 __SYS_BRBSRC(25) +#define SYS_BRBSRC26_EL1 __SYS_BRBSRC(26) +#define SYS_BRBSRC27_EL1 __SYS_BRBSRC(27) +#define SYS_BRBSRC28_EL1 
__SYS_BRBSRC(28) +#define SYS_BRBSRC29_EL1 __SYS_BRBSRC(29) +#define SYS_BRBSRC30_EL1 __SYS_BRBSRC(30) +#define SYS_BRBSRC31_EL1 __SYS_BRBSRC(31) + +#define SYS_BRBTGT0_EL1 __SYS_BRBTGT(0) +#define SYS_BRBTGT1_EL1 __SYS_BRBTGT(1) +#define SYS_BRBTGT2_EL1 __SYS_BRBTGT(2) +#define SYS_BRBTGT3_EL1 __SYS_BRBTGT(3) +#define SYS_BRBTGT4_EL1 __SYS_BRBTGT(4) +#define SYS_BRBTGT5_EL1 __SYS_BRBTGT(5) +#define SYS_BRBTGT6_EL1 __SYS_BRBTGT(6) +#define SYS_BRBTGT7_EL1 __SYS_BRBTGT(7) +#define SYS_BRBTGT8_EL1 __SYS_BRBTGT(8) +#define SYS_BRBTGT9_EL1 __SYS_BRBTGT(9) +#define SYS_BRBTGT10_EL1 __SYS_BRBTGT(10) +#define SYS_BRBTGT11_EL1 __SYS_BRBTGT(11) +#define SYS_BRBTGT12_EL1 __SYS_BRBTGT(12) +#define SYS_BRBTGT13_EL1 __SYS_BRBTGT(13) +#define SYS_BRBTGT14_EL1 __SYS_BRBTGT(14) +#define SYS_BRBTGT15_EL1 __SYS_BRBTGT(15) +#define SYS_BRBTGT16_EL1 __SYS_BRBTGT(16) +#define SYS_BRBTGT17_EL1 __SYS_BRBTGT(17) +#define SYS_BRBTGT18_EL1 __SYS_BRBTGT(18) +#define SYS_BRBTGT19_EL1 __SYS_BRBTGT(19) +#define SYS_BRBTGT20_EL1 __SYS_BRBTGT(20) +#define SYS_BRBTGT21_EL1 __SYS_BRBTGT(21) +#define SYS_BRBTGT22_EL1 __SYS_BRBTGT(22) +#define SYS_BRBTGT23_EL1 __SYS_BRBTGT(23) +#define SYS_BRBTGT24_EL1 __SYS_BRBTGT(24) +#define SYS_BRBTGT25_EL1 __SYS_BRBTGT(25) +#define SYS_BRBTGT26_EL1 __SYS_BRBTGT(26) +#define SYS_BRBTGT27_EL1 __SYS_BRBTGT(27) +#define SYS_BRBTGT28_EL1 __SYS_BRBTGT(28) +#define SYS_BRBTGT29_EL1 __SYS_BRBTGT(29) +#define SYS_BRBTGT30_EL1 __SYS_BRBTGT(30) +#define SYS_BRBTGT31_EL1 __SYS_BRBTGT(31) + #define SYS_MIDR_EL1 sys_reg(3, 0, 0, 0, 0) #define SYS_MPIDR_EL1 sys_reg(3, 0, 0, 0, 5) #define SYS_REVIDR_EL1 sys_reg(3, 0, 0, 0, 6) diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg index 1ea4a3dc68f8..9892af96262f 100644 --- a/arch/arm64/tools/sysreg +++ b/arch/arm64/tools/sysreg @@ -1002,6 +1002,164 @@ UnsignedEnum 3:0 BT EndEnum EndSysreg + +SysregFields BRBINFx_EL1 +Res0 63:47 +Field 46 CCU +Field 45:32 CC +Res0 31:18 +Field 17 LASTFAILED +Field 16 T +Res0 15:14 +Enum 13:8 TYPE + 0b000000 UNCOND_DIRECT + 0b000001 INDIRECT + 0b000010 DIRECT_LINK + 0b000011 INDIRECT_LINK + 0b000101 RET + 0b000111 ERET + 0b001000 COND_DIRECT + 0b100001 DEBUG_HALT + 0b100010 CALL + 0b100011 TRAP + 0b100100 SERROR + 0b100110 INSN_DEBUG + 0b100111 DATA_DEBUG + 0b101010 ALIGN_FAULT + 0b101011 INSN_FAULT + 0b101100 DATA_FAULT + 0b101110 IRQ + 0b101111 FIQ + 0b111001 DEBUG_EXIT +EndEnum +Enum 7:6 EL + 0b00 EL0 + 0b01 EL1 + 0b10 EL2 + 0b11 EL3 +EndEnum +Field 5 MPRED +Res0 4:2 +Enum 1:0 VALID + 0b00 NONE + 0b01 TARGET + 0b10 SOURCE + 0b11 FULL +EndEnum +EndSysregFields + +Sysreg BRBCR_EL1 2 1 9 0 0 +Res0 63:24 +Field 23 EXCEPTION +Field 22 ERTN +Res0 21:9 +Field 8 FZP +Res0 7 +Enum 6:5 TS + 0b01 VIRTUAL + 0b10 GUEST_PHYSICAL + 0b11 PHYSICAL +EndEnum +Field 4 MPRED +Field 3 CC +Res0 2 +Field 1 E1BRE +Field 0 E0BRE +EndSysreg + +Sysreg BRBFCR_EL1 2 1 9 0 1 +Res0 63:30 +Enum 29:28 BANK + 0b0 FIRST + 0b1 SECOND +EndEnum +Res0 27:23 +Field 22 CONDDIR +Field 21 DIRCALL +Field 20 INDCALL +Field 19 RTN +Field 18 INDIRECT +Field 17 DIRECT +Field 16 EnI +Res0 15:8 +Field 7 PAUSED +Field 6 LASTFAILED +Res0 5:0 +EndSysreg + +Sysreg BRBTS_EL1 2 1 9 0 2 +Field 63:0 TS +EndSysreg + +Sysreg BRBINFINJ_EL1 2 1 9 1 0 +Res0 63:47 +Field 46 CCU +Field 45:32 CC +Res0 31:18 +Field 17 LASTFAILED +Field 16 T +Res0 15:14 +Enum 13:8 TYPE + 0b000000 UNCOND_DIRECT + 0b000001 INDIRECT + 0b000010 DIRECT_LINK + 0b000011 INDIRECT_LINK + 0b000101 RET + 0b000111 ERET + 0b001000 COND_DIRECT + 0b100001 DEBUG_HALT + 0b100010 CALL + 0b100011 TRAP + 0b100100 
SERROR + 0b100110 INSN_DEBUG + 0b100111 DATA_DEBUG + 0b101010 ALIGN_FAULT + 0b101011 INSN_FAULT + 0b101100 DATA_FAULT + 0b101110 IRQ + 0b101111 FIQ + 0b111001 DEBUG_EXIT +EndEnum +Enum 7:6 EL + 0b00 EL0 + 0b01 EL1 + 0b10 EL2 + 0b11 EL3 +EndEnum +Field 5 MPRED +Res0 4:2 +Enum 1:0 VALID + 0b00 NONE + 0b01 TARGET + 0b10 SOURCE + 0b11 FULL +EndEnum +EndSysreg + +Sysreg BRBSRCINJ_EL1 2 1 9 1 1 +Field 63:0 ADDRESS +EndSysreg + +Sysreg BRBTGTINJ_EL1 2 1 9 1 2 +Field 63:0 ADDRESS +EndSysreg + +Sysreg BRBIDR0_EL1 2 1 9 2 0 +Res0 63:16 +Enum 15:12 CC + 0b101 20_BIT +EndEnum +Enum 11:8 FORMAT + 0b0 0 +EndEnum +Enum 7:0 NUMREC + 0b0001000 8 + 0b0010000 16 + 0b0100000 32 + 0b1000000 64 +EndEnum +EndSysreg + Sysreg ID_AA64ZFR0_EL1 3 0 0 4 4 Res0 63:60 UnsignedEnum 59:56 F64MM From patchwork Tue Jul 11 08:24:48 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anshuman Khandual X-Patchwork-Id: 118329 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:a6b2:0:b0:3e4:2afc:c1 with SMTP id c18csp334121vqm; Tue, 11 Jul 2023 01:53:31 -0700 (PDT) X-Google-Smtp-Source: APBJJlEsB3brCh3CCPHEEQ7CiJlwuL+4uqODnLrtrWvwho0vjqOzifO9u2EULfDKVdDJVrgJDxTB X-Received: by 2002:a17:907:124d:b0:991:d2a8:6588 with SMTP id wc13-20020a170907124d00b00991d2a86588mr15296722ejb.51.1689065611610; Tue, 11 Jul 2023 01:53:31 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1689065611; cv=none; d=google.com; s=arc-20160816; b=BNy5XBZT8FKhuRSDZrm/Q59cJ91KrAxsLIDTGQaOmBd2S/lkysKZs3RTIBybUSVhIl lu3UhVdUMOjF3oVpvqYvFh5p1Y/SnghkdVvLitUwd/zj5mPZnw9JMYwRlxl+1amRhOEb wpf3Ebpaa0sh8mU6stPD2W2AqPO54XfQvOjpcYB+1aym9v2vAWb4MVuIg4HpzoAEwc95 piAOaccKXnSrIMek6pPE2qMrQZC4KadXB0vgrR7kojcV1qIV0TEJOV3yuFD+QxpBlQzS R20/RNlVQDr6ffEhn93YN5rotZ7mcjBq4hnR6Wq2lUl93cXGZhs6fLE47A0qISQVI9w0 78ew== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=Vnec7PfAkzLIgHqW4qMSPsKvzNHCL6SNCocFFc9CnCk=; fh=uKDf5R1eKBEpFB2sP5rwFdsisg1sKNqEQFf4Rrsmuig=; b=Av/RA2H2AMMIq2Kqbzm4zCF4GVwHNZbWPoHwbepth6lkF9jkcRbt7TXJs8GAMSjJL0 tklJIHe6rZdpMLgjCZqt/j/6aXou/DJ7UExuwv5Nm2mJFe0aBxOAgtu1CneYgYK+/Xm6 cbFPDp2WN5fL5bN4swGG8D5NU7d9lKXhgyYAy7EAQpnbfUm8aepMMwIjm/OFOGcbWT0r hr9eZyKF2cUXLUY93gf/lvMuFDVIlsQIPFeHUA6qv0gF7ZfWFkloJbkvU6oI7BTT+O/n geAnPuT34xJ2t1fxJVJYAPIoZIQhD2FEisnAXheQFiFjYkRrSd2cR6P2wREqrtzD5Fs5 t66g== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: from out1.vger.email (out1.vger.email. 
From: Anshuman Khandual
To: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, will@kernel.org, catalin.marinas@arm.com, mark.rutland@arm.com
Cc: Anshuman Khandual, Mark Brown, James Clark, Rob Herring, Marc Zyngier, Suzuki Poulose, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, linux-perf-users@vger.kernel.org
Subject: [PATCH V13 - RESEND 03/10] arm64/perf: Add branch stack support in struct arm_pmu
Date: Tue, 11 Jul 2023 13:54:48 +0530
Message-Id: <20230711082455.215983-4-anshuman.khandual@arm.com>
In-Reply-To: <20230711082455.215983-1-anshuman.khandual@arm.com>
References: <20230711082455.215983-1-anshuman.khandual@arm.com>

This updates 'struct arm_pmu' for the branch stack sampling support being added later. It adds an element 'reg_brbidr' to capture BRBE attribute details, and a 'has_branch_stack' flag to track whether branch stack sampling is supported. This also enables perf branch stack sampling events on any 'struct arm_pmu' supporting the feature, by removing the current gate that blocks such events unconditionally in armpmu_event_init(); instead, support is now checked via arm_pmu->has_branch_stack.
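To illustrate what this unblocks from userspace (a sketch, not part of the patch), an event requesting branch stack sampling can now reach the back-end instead of being rejected outright:

	#include <linux/perf_event.h>
	#include <string.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	/* Sketch: open a cycles event that asks for branch records */
	static int open_branch_event(void)
	{
		struct perf_event_attr attr;

		memset(&attr, 0, sizeof(attr));
		attr.size = sizeof(attr);
		attr.type = PERF_TYPE_HARDWARE;
		attr.config = PERF_COUNT_HW_CPU_CYCLES;
		attr.sample_period = 100000;
		attr.sample_type = PERF_SAMPLE_BRANCH_STACK;
		attr.branch_sample_type = PERF_SAMPLE_BRANCH_ANY |
					  PERF_SAMPLE_BRANCH_USER;

		/* self, any CPU, no group, no flags */
		return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
	}

With this patch alone the call still returns -EOPNOTSUPP, since nothing sets has_branch_stack yet; the BRBE driver later in the series does that.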
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Mark Rutland
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Tested-by: James Clark
Signed-off-by: Anshuman Khandual
---
 drivers/perf/arm_pmu.c       | 3 +--
 include/linux/perf/arm_pmu.h | 9 ++++++++-
 2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index c0475a96c2a0..fe456a46fe8d 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -512,8 +512,7 @@ static int armpmu_event_init(struct perf_event *event)
 	    !cpumask_test_cpu(event->cpu, &armpmu->supported_cpus))
 		return -ENOENT;
 
-	/* does not support taken branch sampling */
-	if (has_branch_stack(event))
+	if (has_branch_stack(event) && !armpmu->has_branch_stack)
 		return -EOPNOTSUPP;
 
 	return __hw_perf_event_init(event);
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index 3d3bd944d630..5ecd3d0e7645 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -104,7 +104,9 @@ struct arm_pmu {
 	int		(*map_event)(struct perf_event *event);
 	void		(*sched_task)(struct perf_event_pmu_context *pmu_ctx, bool sched_in);
 	int		num_events;
-	bool		secure_access; /* 32-bit ARM only */
+	unsigned int	secure_access	: 1,	/* 32-bit ARM only */
+			has_branch_stack: 1,	/* 64-bit ARM only */
+			reserved	: 30;
 #define ARMV8_PMUV3_MAX_COMMON_EVENTS	0x40
 	DECLARE_BITMAP(pmceid_bitmap, ARMV8_PMUV3_MAX_COMMON_EVENTS);
 #define ARMV8_PMUV3_EXT_COMMON_EVENT_BASE	0x4000
@@ -120,6 +122,11 @@ struct arm_pmu {
 
 	/* Only to be used by ACPI probing code */
 	unsigned long acpi_cpuid;
+
+	/* Implementation specific attributes */
+#ifdef CONFIG_ARM64_BRBE
+	u64		reg_brbidr;
+#endif
 };
 
 #define to_arm_pmu(p)	(container_of(p, struct arm_pmu, pmu))

From patchwork Tue Jul 11 08:24:49 2023
X-Patchwork-Submitter: Anshuman Khandual
X-Patchwork-Id: 118322
From: Anshuman Khandual
To: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, will@kernel.org, catalin.marinas@arm.com, mark.rutland@arm.com
Cc: Anshuman Khandual, Mark Brown, James Clark, Rob Herring, Marc Zyngier, Suzuki Poulose, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, linux-perf-users@vger.kernel.org
Subject: [PATCH V13 - RESEND 04/10] arm64/perf: Add branch stack support in struct pmu_hw_events
Date: Tue, 11 Jul 2023 13:54:49 +0530
Message-Id: <20230711082455.215983-5-anshuman.khandual@arm.com>
In-Reply-To: <20230711082455.215983-1-anshuman.khandual@arm.com>
References: <20230711082455.215983-1-anshuman.khandual@arm.com>

This adds a branch records buffer pointer in 'struct pmu_hw_events', which can be used to capture branch records during a PMU interrupt. This per-CPU pointer needs to be allocated before use.
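A simplified allocation sketch follows (for illustration only; the actual allocation arrives later in the series via alloc_percpu_gfp() in branch_records_alloc()):

	/* Sketch: give each CPU's pmu_hw_events a branch_records buffer */
	static int example_branches_alloc(struct arm_pmu *armpmu)
	{
		int cpu;

		for_each_possible_cpu(cpu) {
			struct pmu_hw_events *events = per_cpu_ptr(armpmu->hw_events, cpu);

			events->branches = kzalloc(sizeof(*events->branches), GFP_KERNEL);
			if (!events->branches)
				return -ENOMEM;
		}
		return 0;
	}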
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Mark Rutland
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Tested-by: James Clark
Acked-by: Mark Rutland
Signed-off-by: Anshuman Khandual
---
 include/linux/perf/arm_pmu.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index 5ecd3d0e7645..2dd5bcd3080d 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -46,6 +46,13 @@ static_assert((PERF_EVENT_FLAG_ARCH & ARMPMU_EVT_63BIT) == ARMPMU_EVT_63BIT);
 	},								\
 }
 
+#define MAX_BRANCH_RECORDS		64
+
+struct branch_records {
+	struct perf_branch_stack	branch_stack;
+	struct perf_branch_entry	branch_entries[MAX_BRANCH_RECORDS];
+};
+
 /* The events for a given PMU register set. */
 struct pmu_hw_events {
 	/*
@@ -72,6 +79,8 @@ struct pmu_hw_events {
 	struct arm_pmu		*percpu_pmu;
 
 	int irq;
+
+	struct branch_records	*branches;
 };
 
 enum armpmu_attr_groups {

From patchwork Tue Jul 11 08:24:50 2023
X-Patchwork-Submitter: Anshuman Khandual
X-Patchwork-Id: 118316
From: Anshuman Khandual
To: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, will@kernel.org, catalin.marinas@arm.com, mark.rutland@arm.com
Cc: Anshuman Khandual, Mark Brown, James Clark, Rob Herring, Marc Zyngier, Suzuki Poulose, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, linux-perf-users@vger.kernel.org
Subject: [PATCH V13 - RESEND 05/10] arm64/perf: Add branch stack support in ARMV8 PMU
Date: Tue, 11 Jul 2023 13:54:50 +0530
Message-Id: <20230711082455.215983-6-anshuman.khandual@arm.com>
In-Reply-To: <20230711082455.215983-1-anshuman.khandual@arm.com>
References: <20230711082455.215983-1-anshuman.khandual@arm.com>

This enables support for branch stack sampling events in the ARMV8 PMU by checking has_branch_stack() on the event inside the 'struct arm_pmu' callbacks, although the branch stack helpers armv8pmu_branch_XXXXX() are just dummy functions for now. While here, this also wires up arm_pmu's sched_task() callback to armv8pmu_sched_task(), which resets the branch record buffer on a sched_in.
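For context, a conceptual sketch of what a real armv8pmu_branch_read() back-end is expected to do (this is not part of the patch and the hardware accessors are hypothetical): fill the per-CPU buffer that the IRQ handler hands to perf via perf_sample_save_brstack().

	/* Sketch: populate cpuc->branches from hardware branch records */
	static void example_branch_read(struct pmu_hw_events *cpuc,
					struct perf_event *event)
	{
		struct branch_records *records = cpuc->branches;
		int i, nr = example_hw_num_records();		/* hypothetical */

		for (i = 0; i < nr && i < MAX_BRANCH_RECORDS; i++) {
			struct perf_branch_entry *entry = &records->branch_entries[i];

			perf_clear_branch_entry_bitfields(entry);
			entry->from = example_hw_branch_source(i);	/* hypothetical */
			entry->to   = example_hw_branch_target(i);	/* hypothetical */
		}
		records->branch_stack.nr = i;
	}

The BRBE implementation of these helpers arrives with the driver in the next patch.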
Cc: Catalin Marinas Cc: Will Deacon Cc: Mark Rutland Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Tested-by: James Clark Signed-off-by: Anshuman Khandual --- arch/arm/include/asm/arm_pmuv3.h | 9 +++ arch/arm64/include/asm/perf_event.h | 31 +++++++++++ drivers/perf/arm_pmuv3.c | 86 +++++++++++++++++++++-------- 3 files changed, 102 insertions(+), 24 deletions(-) diff --git a/arch/arm/include/asm/arm_pmuv3.h b/arch/arm/include/asm/arm_pmuv3.h index f3cd04ff022d..d7c438939a6f 100644 --- a/arch/arm/include/asm/arm_pmuv3.h +++ b/arch/arm/include/asm/arm_pmuv3.h @@ -249,4 +249,13 @@ static inline bool is_pmuv3p5(int pmuver) return pmuver >= ARMV8_PMU_DFR_VER_V3P5; } +/* BRBE stubs */ +#ifdef CONFIG_PERF_EVENTS +static inline void armv8pmu_branch_read(struct pmu_hw_events *cpuc, struct perf_event *event) { } +static inline bool armv8pmu_branch_attr_valid(struct perf_event *event) { return false; } +static inline void armv8pmu_branch_enable(struct perf_event *event) { } +static inline void armv8pmu_branch_disable(struct perf_event *event) { } +static inline void armv8pmu_branch_probe(struct arm_pmu *arm_pmu) { } +static inline void armv8pmu_branch_reset(void) { } +#endif #endif diff --git a/arch/arm64/include/asm/perf_event.h b/arch/arm64/include/asm/perf_event.h index eb7071c9eb34..ebc392ba3559 100644 --- a/arch/arm64/include/asm/perf_event.h +++ b/arch/arm64/include/asm/perf_event.h @@ -24,4 +24,35 @@ extern unsigned long perf_misc_flags(struct pt_regs *regs); (regs)->pstate = PSR_MODE_EL1h; \ } +struct pmu_hw_events; +struct arm_pmu; +struct perf_event; + +#ifdef CONFIG_PERF_EVENTS +static inline bool has_branch_stack(struct perf_event *event); + +static inline void armv8pmu_branch_read(struct pmu_hw_events *cpuc, struct perf_event *event) +{ + WARN_ON_ONCE(!has_branch_stack(event)); +} + +static inline bool armv8pmu_branch_attr_valid(struct perf_event *event) +{ + WARN_ON_ONCE(!has_branch_stack(event)); + return false; +} + +static inline void armv8pmu_branch_enable(struct perf_event *event) +{ + WARN_ON_ONCE(!has_branch_stack(event)); +} + +static inline void armv8pmu_branch_disable(struct perf_event *event) +{ + WARN_ON_ONCE(!has_branch_stack(event)); +} + +static inline void armv8pmu_branch_probe(struct arm_pmu *arm_pmu) { } +static inline void armv8pmu_branch_reset(void) { } +#endif #endif diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c index 08b3a1bf0ef6..10bbe02f4079 100644 --- a/drivers/perf/arm_pmuv3.c +++ b/drivers/perf/arm_pmuv3.c @@ -721,38 +721,21 @@ static void armv8pmu_enable_event(struct perf_event *event) * Enable counter and interrupt, and set the counter to count * the event that we're interested in. */ - - /* - * Disable counter - */ armv8pmu_disable_event_counter(event); - - /* - * Set event. 
- */ armv8pmu_write_event_type(event); - - /* - * Enable interrupt for this counter - */ armv8pmu_enable_event_irq(event); - - /* - * Enable counter - */ armv8pmu_enable_event_counter(event); + + if (has_branch_stack(event)) + armv8pmu_branch_enable(event); } static void armv8pmu_disable_event(struct perf_event *event) { - /* - * Disable counter - */ - armv8pmu_disable_event_counter(event); + if (has_branch_stack(event)) + armv8pmu_branch_disable(event); - /* - * Disable interrupt for this counter - */ + armv8pmu_disable_event_counter(event); armv8pmu_disable_event_irq(event); } @@ -830,6 +813,11 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu) if (!armpmu_event_set_period(event)) continue; + if (has_branch_stack(event) && !WARN_ON(!cpuc->branches)) { + armv8pmu_branch_read(cpuc, event); + perf_sample_save_brstack(&data, event, &cpuc->branches->branch_stack); + } + /* * Perf event overflow will queue the processing of the event as * an irq_work which will be taken care of in the handling of @@ -928,6 +916,14 @@ static int armv8pmu_user_event_idx(struct perf_event *event) return event->hw.idx; } +static void armv8pmu_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in) +{ + struct arm_pmu *armpmu = to_arm_pmu(pmu_ctx->pmu); + + if (sched_in && armpmu->has_branch_stack) + armv8pmu_branch_reset(); +} + /* * Add an event filter to a given event. */ @@ -998,6 +994,9 @@ static void armv8pmu_reset(void *info) pmcr |= ARMV8_PMU_PMCR_LP; armv8pmu_pmcr_write(pmcr); + + if (cpu_pmu->has_branch_stack) + armv8pmu_branch_reset(); } static int __armv8_pmuv3_map_event_id(struct arm_pmu *armpmu, @@ -1035,6 +1034,9 @@ static int __armv8_pmuv3_map_event(struct perf_event *event, hw_event_id = __armv8_pmuv3_map_event_id(armpmu, event); + if (has_branch_stack(event) && !armv8pmu_branch_attr_valid(event)) + return -EOPNOTSUPP; + /* * CHAIN events only work when paired with an adjacent counter, and it * never makes sense for a user to open one in isolation, as they'll be @@ -1151,6 +1153,33 @@ static void __armv8pmu_probe_pmu(void *info) cpu_pmu->reg_pmmir = read_pmmir(); else cpu_pmu->reg_pmmir = 0; + armv8pmu_branch_probe(cpu_pmu); +} + +static int branch_records_alloc(struct arm_pmu *armpmu) +{ + struct branch_records __percpu *records; + int cpu; + + records = alloc_percpu_gfp(struct branch_records, GFP_KERNEL); + if (!records) + return -ENOMEM; + + /* + * FIXME: Memory allocated via records gets completely + * consumed here, never required to be freed up later. Hence + * losing access to on stack 'records' is acceptable. + * Otherwise this alloc handle has to be saved some where. + */ + for_each_possible_cpu(cpu) { + struct pmu_hw_events *events_cpu; + struct branch_records *records_cpu; + + events_cpu = per_cpu_ptr(armpmu->hw_events, cpu); + records_cpu = per_cpu_ptr(records, cpu); + events_cpu->branches = records_cpu; + } + return 0; } static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu) @@ -1167,7 +1196,15 @@ static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu) if (ret) return ret; - return probe.present ? 
0 : -ENODEV; + if (!probe.present) + return -ENODEV; + + if (cpu_pmu->has_branch_stack) { + ret = branch_records_alloc(cpu_pmu); + if (ret) + return ret; + } + return 0; } static void armv8pmu_disable_user_access_ipi(void *unused) @@ -1230,6 +1267,7 @@ static int armv8_pmu_init(struct arm_pmu *cpu_pmu, char *name, cpu_pmu->set_event_filter = armv8pmu_set_event_filter; cpu_pmu->pmu.event_idx = armv8pmu_user_event_idx; + cpu_pmu->sched_task = armv8pmu_sched_task; cpu_pmu->name = name; cpu_pmu->map_event = map_event; From patchwork Tue Jul 11 08:24:51 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anshuman Khandual X-Patchwork-Id: 118319 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:a6b2:0:b0:3e4:2afc:c1 with SMTP id c18csp329534vqm; Tue, 11 Jul 2023 01:40:59 -0700 (PDT) X-Google-Smtp-Source: APBJJlGXU4UpIksk5j37J5c+9fy2laHm8P2/lPWqLAFxr7ElQIwMP7A6lkha/Xxs1gKHrJudbHFe X-Received: by 2002:a05:6a00:1708:b0:682:4c1c:a0fc with SMTP id h8-20020a056a00170800b006824c1ca0fcmr18881471pfc.19.1689064859424; Tue, 11 Jul 2023 01:40:59 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1689064859; cv=none; d=google.com; s=arc-20160816; b=sQ4FKsYHFFPBnZalwsyAVkhWOml/WhpKy+agRMBPzgj6g+4JEZ0AmqVlFYSWDXlBBu 46ZUAZv8PTceUjMFzbZ15le9Rq4C70bUOYM2AE1C5qpB9cAN8xAqxIxsTR9yH2xmYAiq 86ncoVMYWuBkYoho4cAO4IVf0tcccK1rpJ4ZrziD0O3+zDt9y8E+a59Ax77CXmHKWb/f fqSInSM6CLve07KfEsz4xKhosVMtul4LKNyoIAw9SaPDbP4BQQg5H+qi7X3zkGf5FEq7 50GP6WZMQm32lecX/3zrJoreSL+kPmPRFUYMHO+CWAsxOcbbe1fNdx4tgewTURBy4QSk DqRw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=zY2IRyV3RaOsamWJX5hYXNI2Hsy0j9lhBSXTFadTPOk=; fh=uKDf5R1eKBEpFB2sP5rwFdsisg1sKNqEQFf4Rrsmuig=; b=Bj4BVpL00rSBZhPVF4WnknFHYijqYw8opuolTDda/kGjUXf+SisD4CRqeW62CRZvO0 wgTs69kGcTwUOngwCfdmg/+fkJBpW5YPAD2b91hIU0ZvRlAr6IPKofY4SwXMsjYfxiip Tpj5K/t1Tdprk7GI8Ou14j3LgmcCpSw51ORDNiVH7QY6fyr618ZgYkcGBbru7joydRmj 4WJVIUAJbuR5KFIGwrmXJcTkCeW0q7UFU8ij8Ox6+uO97v21KR0M/bv8u8YDxasfQQ0E uHQUL3tzCIg0uY5vmdCeu0m/iKnD+LXaQiGv7BI9fRlzveqYGXHJsjrPHJi3fW0+EHYk f+Tw== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: from out1.vger.email (out1.vger.email. 
From: Anshuman Khandual
To: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, will@kernel.org, catalin.marinas@arm.com, mark.rutland@arm.com
Cc: Anshuman Khandual, Mark Brown, James Clark, Rob Herring, Marc Zyngier, Suzuki Poulose, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, linux-perf-users@vger.kernel.org
Subject: [PATCH V13 - RESEND 06/10] arm64/perf: Enable branch stack events via FEAT_BRBE
Date: Tue, 11 Jul 2023 13:54:51 +0530
Message-Id: <20230711082455.215983-7-anshuman.khandual@arm.com>
In-Reply-To: <20230711082455.215983-1-anshuman.khandual@arm.com>
References: <20230711082455.215983-1-anshuman.khandual@arm.com>

This enables branch stack sampling events in the ARMV8 PMU via the architecture feature FEAT_BRBE, aka the Branch Record Buffer Extension. This implements the required branch helper functions armv8pmu_branch_XXXXX(), with the implementation wrapped in a new config option CONFIG_ARM64_BRBE.
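As a small illustration of how the driver sizes itself from the hardware (a sketch only; the series does this through brbe_attributes_probe() and the helpers in arm_brbe.h), the implemented record count can be pulled straight out of BRBIDR0_EL1 using the fields defined earlier in the series:

	/* Sketch: NUMREC encodes the implemented record count (8/16/32/64) */
	static int example_brbe_num_records(void)
	{
		u64 brbidr = read_sysreg_s(SYS_BRBIDR0_EL1);

		return SYS_FIELD_GET(BRBIDR0_EL1, NUMREC, brbidr);
	}

With CONFIG_ARM64_BRBE enabled, branch sampling can then be exercised with the usual perf tooling, e.g. 'perf record -j any,u -- <workload>'.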
Cc: Catalin Marinas Cc: Will Deacon Cc: Mark Rutland Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Tested-by: James Clark Signed-off-by: Anshuman Khandual --- arch/arm64/include/asm/perf_event.h | 9 + drivers/perf/Kconfig | 11 + drivers/perf/Makefile | 1 + drivers/perf/arm_brbe.c | 549 ++++++++++++++++++++++++++++ drivers/perf/arm_brbe.h | 257 +++++++++++++ drivers/perf/arm_pmuv3.c | 4 + 6 files changed, 831 insertions(+) create mode 100644 drivers/perf/arm_brbe.c create mode 100644 drivers/perf/arm_brbe.h diff --git a/arch/arm64/include/asm/perf_event.h b/arch/arm64/include/asm/perf_event.h index ebc392ba3559..49a973571415 100644 --- a/arch/arm64/include/asm/perf_event.h +++ b/arch/arm64/include/asm/perf_event.h @@ -31,6 +31,14 @@ struct perf_event; #ifdef CONFIG_PERF_EVENTS static inline bool has_branch_stack(struct perf_event *event); +#ifdef CONFIG_ARM64_BRBE +void armv8pmu_branch_read(struct pmu_hw_events *cpuc, struct perf_event *event); +bool armv8pmu_branch_attr_valid(struct perf_event *event); +void armv8pmu_branch_enable(struct perf_event *event); +void armv8pmu_branch_disable(struct perf_event *event); +void armv8pmu_branch_probe(struct arm_pmu *arm_pmu); +void armv8pmu_branch_reset(void); +#else static inline void armv8pmu_branch_read(struct pmu_hw_events *cpuc, struct perf_event *event) { WARN_ON_ONCE(!has_branch_stack(event)); @@ -56,3 +64,4 @@ static inline void armv8pmu_branch_probe(struct arm_pmu *arm_pmu) { } static inline void armv8pmu_branch_reset(void) { } #endif #endif +#endif diff --git a/drivers/perf/Kconfig b/drivers/perf/Kconfig index f4572a5cca72..7c8448051741 100644 --- a/drivers/perf/Kconfig +++ b/drivers/perf/Kconfig @@ -180,6 +180,17 @@ config ARM_SPE_PMU Extension, which provides periodic sampling of operations in the CPU pipeline and reports this via the perf AUX interface. +config ARM64_BRBE + bool "Enable support for Branch Record Buffer Extension (BRBE)" + depends on PERF_EVENTS && ARM64 && ARM_PMU + default y + help + Enable perf support for Branch Record Buffer Extension (BRBE) which + records all branches taken in an execution path. This supports some + branch types and privilege based filtering. It captured additional + relevant information such as cycle count, misprediction and branch + type, branch privilege level etc. + config ARM_DMC620_PMU tristate "Enable PMU support for the ARM DMC-620 memory controller" depends on (ARM64 && ACPI) || COMPILE_TEST diff --git a/drivers/perf/Makefile b/drivers/perf/Makefile index 16b3ec4db916..a8b7bc22e3d6 100644 --- a/drivers/perf/Makefile +++ b/drivers/perf/Makefile @@ -18,6 +18,7 @@ obj-$(CONFIG_RISCV_PMU_SBI) += riscv_pmu_sbi.o obj-$(CONFIG_THUNDERX2_PMU) += thunderx2_pmu.o obj-$(CONFIG_XGENE_PMU) += xgene_pmu.o obj-$(CONFIG_ARM_SPE_PMU) += arm_spe_pmu.o +obj-$(CONFIG_ARM64_BRBE) += arm_brbe.o obj-$(CONFIG_ARM_DMC620_PMU) += arm_dmc620_pmu.o obj-$(CONFIG_MARVELL_CN10K_TAD_PMU) += marvell_cn10k_tad_pmu.o obj-$(CONFIG_MARVELL_CN10K_DDR_PMU) += marvell_cn10k_ddr_pmu.o diff --git a/drivers/perf/arm_brbe.c b/drivers/perf/arm_brbe.c new file mode 100644 index 000000000000..79106300cf2e --- /dev/null +++ b/drivers/perf/arm_brbe.c @@ -0,0 +1,549 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Branch Record Buffer Extension Driver. 
+ * + * Copyright (C) 2022 ARM Limited + * + * Author: Anshuman Khandual + */ +#include "arm_brbe.h" + +static bool valid_brbe_nr(int brbe_nr) +{ + return brbe_nr == BRBIDR0_EL1_NUMREC_8 || + brbe_nr == BRBIDR0_EL1_NUMREC_16 || + brbe_nr == BRBIDR0_EL1_NUMREC_32 || + brbe_nr == BRBIDR0_EL1_NUMREC_64; +} + +static bool valid_brbe_cc(int brbe_cc) +{ + return brbe_cc == BRBIDR0_EL1_CC_20_BIT; +} + +static bool valid_brbe_format(int brbe_format) +{ + return brbe_format == BRBIDR0_EL1_FORMAT_0; +} + +static bool valid_brbe_version(int brbe_version) +{ + return brbe_version == ID_AA64DFR0_EL1_BRBE_IMP || + brbe_version == ID_AA64DFR0_EL1_BRBE_BRBE_V1P1; +} + +static void select_brbe_bank(int bank) +{ + u64 brbfcr; + + WARN_ON(bank > BRBE_BANK_IDX_1); + brbfcr = read_sysreg_s(SYS_BRBFCR_EL1); + brbfcr &= ~BRBFCR_EL1_BANK_MASK; + brbfcr |= SYS_FIELD_PREP(BRBFCR_EL1, BANK, bank); + write_sysreg_s(brbfcr, SYS_BRBFCR_EL1); + isb(); +} + +/* + * Generic perf branch filters supported on BRBE + * + * New branch filters need to be evaluated whether they could be supported on + * BRBE. This ensures that such branch filters would not just be accepted, to + * fail silently. PERF_SAMPLE_BRANCH_HV is a special case that is selectively + * supported only on platforms where kernel is in hyp mode. + */ +#define BRBE_EXCLUDE_BRANCH_FILTERS (PERF_SAMPLE_BRANCH_ABORT_TX | \ + PERF_SAMPLE_BRANCH_IN_TX | \ + PERF_SAMPLE_BRANCH_NO_TX | \ + PERF_SAMPLE_BRANCH_CALL_STACK) + +#define BRBE_ALLOWED_BRANCH_FILTERS (PERF_SAMPLE_BRANCH_USER | \ + PERF_SAMPLE_BRANCH_KERNEL | \ + PERF_SAMPLE_BRANCH_HV | \ + PERF_SAMPLE_BRANCH_ANY | \ + PERF_SAMPLE_BRANCH_ANY_CALL | \ + PERF_SAMPLE_BRANCH_ANY_RETURN | \ + PERF_SAMPLE_BRANCH_IND_CALL | \ + PERF_SAMPLE_BRANCH_COND | \ + PERF_SAMPLE_BRANCH_IND_JUMP | \ + PERF_SAMPLE_BRANCH_CALL | \ + PERF_SAMPLE_BRANCH_NO_FLAGS | \ + PERF_SAMPLE_BRANCH_NO_CYCLES | \ + PERF_SAMPLE_BRANCH_TYPE_SAVE | \ + PERF_SAMPLE_BRANCH_HW_INDEX | \ + PERF_SAMPLE_BRANCH_PRIV_SAVE) + +#define BRBE_PERF_BRANCH_FILTERS (BRBE_ALLOWED_BRANCH_FILTERS | \ + BRBE_EXCLUDE_BRANCH_FILTERS) + +bool armv8pmu_branch_attr_valid(struct perf_event *event) +{ + u64 branch_type = event->attr.branch_sample_type; + + /* + * Ensure both perf branch filter allowed and exclude + * masks are always in sync with the generic perf ABI. + */ + BUILD_BUG_ON(BRBE_PERF_BRANCH_FILTERS != (PERF_SAMPLE_BRANCH_MAX - 1)); + + if (branch_type & ~BRBE_ALLOWED_BRANCH_FILTERS) { + pr_debug_once("requested branch filter not supported 0x%llx\n", branch_type); + return false; + } + + /* + * If the event does not have at least one of the privilege + * branch filters as in PERF_SAMPLE_BRANCH_PLM_ALL, the core + * perf will adjust its value based on perf event's existing + * privilege level via attr.exclude_[user|kernel|hv]. + * + * As event->attr.branch_sample_type might have been changed + * when the event reaches here, it is not possible to figure + * out whether the event originally had HV privilege request + * or got added via the core perf. Just report this situation + * once and continue ignoring if there are other instances. 
+ */ + if ((branch_type & PERF_SAMPLE_BRANCH_HV) && !is_kernel_in_hyp_mode()) + pr_debug_once("hypervisor privilege filter not supported 0x%llx\n", branch_type); + + return true; +} + +static int brbe_attributes_probe(struct arm_pmu *armpmu, u32 brbe) +{ + u64 brbidr = read_sysreg_s(SYS_BRBIDR0_EL1); + int brbe_version, brbe_format, brbe_cc, brbe_nr; + + brbe_version = brbe; + brbe_format = brbe_get_format(brbidr); + brbe_cc = brbe_get_cc_bits(brbidr); + brbe_nr = brbe_get_numrec(brbidr); + armpmu->reg_brbidr = brbidr; + + if (!valid_brbe_version(brbe_version) || + !valid_brbe_format(brbe_format) || + !valid_brbe_cc(brbe_cc) || + !valid_brbe_nr(brbe_nr)) + return -EOPNOTSUPP; + return 0; +} + +void armv8pmu_branch_probe(struct arm_pmu *armpmu) +{ + u64 aa64dfr0 = read_sysreg_s(SYS_ID_AA64DFR0_EL1); + u32 brbe; + + brbe = cpuid_feature_extract_unsigned_field(aa64dfr0, ID_AA64DFR0_EL1_BRBE_SHIFT); + if (!brbe) + return; + + if (brbe_attributes_probe(armpmu, brbe)) + return; + + armpmu->has_branch_stack = 1; +} + +static u64 branch_type_to_brbfcr(int branch_type) +{ + u64 brbfcr = 0; + + if (branch_type & PERF_SAMPLE_BRANCH_ANY) { + brbfcr |= BRBFCR_EL1_BRANCH_FILTERS; + return brbfcr; + } + + if (branch_type & PERF_SAMPLE_BRANCH_ANY_CALL) { + brbfcr |= BRBFCR_EL1_INDCALL; + brbfcr |= BRBFCR_EL1_DIRCALL; + } + + if (branch_type & PERF_SAMPLE_BRANCH_ANY_RETURN) + brbfcr |= BRBFCR_EL1_RTN; + + if (branch_type & PERF_SAMPLE_BRANCH_IND_CALL) + brbfcr |= BRBFCR_EL1_INDCALL; + + if (branch_type & PERF_SAMPLE_BRANCH_COND) + brbfcr |= BRBFCR_EL1_CONDDIR; + + if (branch_type & PERF_SAMPLE_BRANCH_IND_JUMP) + brbfcr |= BRBFCR_EL1_INDIRECT; + + if (branch_type & PERF_SAMPLE_BRANCH_CALL) + brbfcr |= BRBFCR_EL1_DIRCALL; + + return brbfcr; +} + +static u64 branch_type_to_brbcr(int branch_type) +{ + u64 brbcr = BRBCR_EL1_DEFAULT_TS; + + /* + * BRBE should be paused on PMU interrupt while tracing kernel + * space to stop capturing further branch records. Otherwise + * interrupt handler branch records might get into the samples + * which is not desired. + * + * BRBE need not be paused on PMU interrupt while tracing only + * the user space, because it will automatically be inside the + * prohibited region. But even after PMU overflow occurs, the + * interrupt could still take much more cycles, before it can + * be taken and by that time BRBE will have been overwritten. + * Hence enable pause on PMU interrupt mechanism even for user + * only traces as well. + */ + brbcr |= BRBCR_EL1_FZP; + + /* + * When running in the hyp mode, writing into BRBCR_EL1 + * actually writes into BRBCR_EL2 instead. Field E2BRE + * is also at the same position as E1BRE. + */ + if (branch_type & PERF_SAMPLE_BRANCH_USER) + brbcr |= BRBCR_EL1_E0BRE; + + if (branch_type & PERF_SAMPLE_BRANCH_KERNEL) + brbcr |= BRBCR_EL1_E1BRE; + + if (branch_type & PERF_SAMPLE_BRANCH_HV) { + if (is_kernel_in_hyp_mode()) + brbcr |= BRBCR_EL1_E1BRE; + } + + if (!(branch_type & PERF_SAMPLE_BRANCH_NO_CYCLES)) + brbcr |= BRBCR_EL1_CC; + + if (!(branch_type & PERF_SAMPLE_BRANCH_NO_FLAGS)) + brbcr |= BRBCR_EL1_MPRED; + + /* + * The exception and exception return branches could be + * captured, irrespective of the perf event's privilege. + * If the perf event does not have enough privilege for + * a given exception level, then addresses which falls + * under that exception level will be reported as zero + * for the captured branch record, creating source only + * or target only records. 
+ */ + if (branch_type & PERF_SAMPLE_BRANCH_ANY) { + brbcr |= BRBCR_EL1_EXCEPTION; + brbcr |= BRBCR_EL1_ERTN; + } + + if (branch_type & PERF_SAMPLE_BRANCH_ANY_CALL) + brbcr |= BRBCR_EL1_EXCEPTION; + + if (branch_type & PERF_SAMPLE_BRANCH_ANY_RETURN) + brbcr |= BRBCR_EL1_ERTN; + + return brbcr & BRBCR_EL1_DEFAULT_CONFIG; +} + +void armv8pmu_branch_enable(struct perf_event *event) +{ + u64 branch_type = event->attr.branch_sample_type; + u64 brbfcr, brbcr; + + brbfcr = read_sysreg_s(SYS_BRBFCR_EL1); + brbfcr &= ~BRBFCR_EL1_DEFAULT_CONFIG; + brbfcr |= branch_type_to_brbfcr(branch_type); + write_sysreg_s(brbfcr, SYS_BRBFCR_EL1); + isb(); + + brbcr = read_sysreg_s(SYS_BRBCR_EL1); + brbcr &= ~BRBCR_EL1_DEFAULT_CONFIG; + brbcr |= branch_type_to_brbcr(branch_type); + write_sysreg_s(brbcr, SYS_BRBCR_EL1); + isb(); + armv8pmu_branch_reset(); +} + +void armv8pmu_branch_disable(struct perf_event *event) +{ + u64 brbfcr = read_sysreg_s(SYS_BRBFCR_EL1); + u64 brbcr = read_sysreg_s(SYS_BRBCR_EL1); + + brbcr &= ~(BRBCR_EL1_E0BRE | BRBCR_EL1_E1BRE); + brbfcr |= BRBFCR_EL1_PAUSED; + write_sysreg_s(brbcr, SYS_BRBCR_EL1); + write_sysreg_s(brbfcr, SYS_BRBFCR_EL1); + isb(); +} + +static void brbe_set_perf_entry_type(struct perf_branch_entry *entry, u64 brbinf) +{ + int brbe_type = brbe_get_type(brbinf); + + switch (brbe_type) { + case BRBINFx_EL1_TYPE_UNCOND_DIRECT: + entry->type = PERF_BR_UNCOND; + break; + case BRBINFx_EL1_TYPE_INDIRECT: + entry->type = PERF_BR_IND; + break; + case BRBINFx_EL1_TYPE_DIRECT_LINK: + entry->type = PERF_BR_CALL; + break; + case BRBINFx_EL1_TYPE_INDIRECT_LINK: + entry->type = PERF_BR_IND_CALL; + break; + case BRBINFx_EL1_TYPE_RET: + entry->type = PERF_BR_RET; + break; + case BRBINFx_EL1_TYPE_COND_DIRECT: + entry->type = PERF_BR_COND; + break; + case BRBINFx_EL1_TYPE_CALL: + entry->type = PERF_BR_CALL; + break; + case BRBINFx_EL1_TYPE_TRAP: + entry->type = PERF_BR_SYSCALL; + break; + case BRBINFx_EL1_TYPE_ERET: + entry->type = PERF_BR_ERET; + break; + case BRBINFx_EL1_TYPE_IRQ: + entry->type = PERF_BR_IRQ; + break; + case BRBINFx_EL1_TYPE_DEBUG_HALT: + entry->type = PERF_BR_EXTEND_ABI; + entry->new_type = PERF_BR_ARM64_DEBUG_HALT; + break; + case BRBINFx_EL1_TYPE_SERROR: + entry->type = PERF_BR_SERROR; + break; + case BRBINFx_EL1_TYPE_INSN_DEBUG: + entry->type = PERF_BR_EXTEND_ABI; + entry->new_type = PERF_BR_ARM64_DEBUG_INST; + break; + case BRBINFx_EL1_TYPE_DATA_DEBUG: + entry->type = PERF_BR_EXTEND_ABI; + entry->new_type = PERF_BR_ARM64_DEBUG_DATA; + break; + case BRBINFx_EL1_TYPE_ALIGN_FAULT: + entry->type = PERF_BR_EXTEND_ABI; + entry->new_type = PERF_BR_NEW_FAULT_ALGN; + break; + case BRBINFx_EL1_TYPE_INSN_FAULT: + entry->type = PERF_BR_EXTEND_ABI; + entry->new_type = PERF_BR_NEW_FAULT_INST; + break; + case BRBINFx_EL1_TYPE_DATA_FAULT: + entry->type = PERF_BR_EXTEND_ABI; + entry->new_type = PERF_BR_NEW_FAULT_DATA; + break; + case BRBINFx_EL1_TYPE_FIQ: + entry->type = PERF_BR_EXTEND_ABI; + entry->new_type = PERF_BR_ARM64_FIQ; + break; + case BRBINFx_EL1_TYPE_DEBUG_EXIT: + entry->type = PERF_BR_EXTEND_ABI; + entry->new_type = PERF_BR_ARM64_DEBUG_EXIT; + break; + default: + pr_warn_once("%d - unknown branch type captured\n", brbe_type); + entry->type = PERF_BR_UNKNOWN; + break; + } +} + +static int brbe_get_perf_priv(u64 brbinf) +{ + int brbe_el = brbe_get_el(brbinf); + + switch (brbe_el) { + case BRBINFx_EL1_EL_EL0: + return PERF_BR_PRIV_USER; + case BRBINFx_EL1_EL_EL1: + return PERF_BR_PRIV_KERNEL; + case BRBINFx_EL1_EL_EL2: + if (is_kernel_in_hyp_mode()) + return 
PERF_BR_PRIV_KERNEL; + return PERF_BR_PRIV_HV; + default: + pr_warn_once("%d - unknown branch privilege captured\n", brbe_el); + return PERF_BR_PRIV_UNKNOWN; + } +} + +static void capture_brbe_flags(struct perf_branch_entry *entry, struct perf_event *event, + u64 brbinf) +{ + if (branch_sample_type(event)) + brbe_set_perf_entry_type(entry, brbinf); + + if (!branch_sample_no_cycles(event)) + entry->cycles = brbe_get_cycles(brbinf); + + if (!branch_sample_no_flags(event)) { + /* + * BRBINFx_EL1.LASTFAILED indicates that a TME transaction failed (or + * was cancelled) prior to this record, and some number of records + * prior to this one, may have been generated during an attempt to + * execute the transaction. + * + * We will remove such entries later in process_branch_aborts(). + */ + entry->abort = brbe_get_lastfailed(brbinf); + + /* + * All these information (i.e transaction state and mispredicts) + * are available for source only and complete branch records. + */ + if (brbe_record_is_complete(brbinf) || + brbe_record_is_source_only(brbinf)) { + entry->mispred = brbe_get_mispredict(brbinf); + entry->predicted = !entry->mispred; + entry->in_tx = brbe_get_in_tx(brbinf); + } + } + + if (branch_sample_priv(event)) { + /* + * All these information (i.e branch privilege level) are + * available for target only and complete branch records. + */ + if (brbe_record_is_complete(brbinf) || + brbe_record_is_target_only(brbinf)) + entry->priv = brbe_get_perf_priv(brbinf); + } +} + +/* + * A branch record with BRBINFx_EL1.LASTFAILED set, implies that all + * preceding consecutive branch records, that were in a transaction + * (i.e their BRBINFx_EL1.TX set) have been aborted. + * + * Similarly BRBFCR_EL1.LASTFAILED set, indicate that all preceding + * consecutive branch records up to the last record, which were in a + * transaction (i.e their BRBINFx_EL1.TX set) have been aborted. + * + * --------------------------------- ------------------- + * | 00 | BRBSRC | BRBTGT | BRBINF | | TX = 1 | LF = 0 | [TX success] + * --------------------------------- ------------------- + * | 01 | BRBSRC | BRBTGT | BRBINF | | TX = 1 | LF = 0 | [TX success] + * --------------------------------- ------------------- + * | 02 | BRBSRC | BRBTGT | BRBINF | | TX = 0 | LF = 0 | + * --------------------------------- ------------------- + * | 03 | BRBSRC | BRBTGT | BRBINF | | TX = 1 | LF = 0 | [TX failed] + * --------------------------------- ------------------- + * | 04 | BRBSRC | BRBTGT | BRBINF | | TX = 1 | LF = 0 | [TX failed] + * --------------------------------- ------------------- + * | 05 | BRBSRC | BRBTGT | BRBINF | | TX = 0 | LF = 1 | + * --------------------------------- ------------------- + * | .. | BRBSRC | BRBTGT | BRBINF | | TX = 0 | LF = 0 | + * --------------------------------- ------------------- + * | 61 | BRBSRC | BRBTGT | BRBINF | | TX = 1 | LF = 0 | [TX failed] + * --------------------------------- ------------------- + * | 62 | BRBSRC | BRBTGT | BRBINF | | TX = 1 | LF = 0 | [TX failed] + * --------------------------------- ------------------- + * | 63 | BRBSRC | BRBTGT | BRBINF | | TX = 1 | LF = 0 | [TX failed] + * --------------------------------- ------------------- + * + * BRBFCR_EL1.LASTFAILED == 1 + * + * BRBFCR_EL1.LASTFAILED fails all those consecutive, in transaction + * branches records near the end of the BRBE buffer. + * + * Architecture does not guarantee a non transaction (TX = 0) branch + * record between two different transactions. 
So it is possible that + * a subsequent lastfailed record (TX = 0, LF = 1) might erroneously + * mark more than required transactions as aborted. + */ +static void process_branch_aborts(struct pmu_hw_events *cpuc) +{ + u64 brbfcr = read_sysreg_s(SYS_BRBFCR_EL1); + bool lastfailed = !!(brbfcr & BRBFCR_EL1_LASTFAILED); + int idx = brbe_get_numrec(cpuc->percpu_pmu->reg_brbidr) - 1; + struct perf_branch_entry *entry; + + do { + entry = &cpuc->branches->branch_entries[idx]; + if (entry->in_tx) { + entry->abort = lastfailed; + } else { + lastfailed = entry->abort; + entry->abort = false; + } + } while (idx--, idx >= 0); +} + +void armv8pmu_branch_reset(void) +{ + asm volatile(BRB_IALL); + isb(); +} + +static bool capture_branch_entry(struct pmu_hw_events *cpuc, + struct perf_event *event, int idx) +{ + struct perf_branch_entry *entry = &cpuc->branches->branch_entries[idx]; + u64 brbinf = get_brbinf_reg(idx); + + /* + * There are no valid entries anymore on the buffer. + * Abort the branch record processing to save some + * cycles and also reduce the capture/process load + * for the user space as well. + */ + if (brbe_invalid(brbinf)) + return false; + + perf_clear_branch_entry_bitfields(entry); + if (brbe_record_is_complete(brbinf)) { + entry->from = get_brbsrc_reg(idx); + entry->to = get_brbtgt_reg(idx); + } else if (brbe_record_is_source_only(brbinf)) { + entry->from = get_brbsrc_reg(idx); + entry->to = 0; + } else if (brbe_record_is_target_only(brbinf)) { + entry->from = 0; + entry->to = get_brbtgt_reg(idx); + } + capture_brbe_flags(entry, event, brbinf); + return true; +} + +void armv8pmu_branch_read(struct pmu_hw_events *cpuc, struct perf_event *event) +{ + int nr_hw_entries = brbe_get_numrec(cpuc->percpu_pmu->reg_brbidr); + u64 brbfcr, brbcr; + int idx = 0; + + brbcr = read_sysreg_s(SYS_BRBCR_EL1); + brbfcr = read_sysreg_s(SYS_BRBFCR_EL1); + + /* Ensure pause on PMU interrupt is enabled */ + WARN_ON_ONCE(!(brbcr & BRBCR_EL1_FZP)); + + /* Pause the buffer */ + write_sysreg_s(brbfcr | BRBFCR_EL1_PAUSED, SYS_BRBFCR_EL1); + isb(); + + /* Loop through bank 0 */ + select_brbe_bank(BRBE_BANK_IDX_0); + while (idx < nr_hw_entries && idx <= BRBE_BANK0_IDX_MAX) { + if (!capture_branch_entry(cpuc, event, idx)) + goto skip_bank_1; + idx++; + } + + /* Loop through bank 1 */ + select_brbe_bank(BRBE_BANK_IDX_1); + while (idx < nr_hw_entries && idx <= BRBE_BANK1_IDX_MAX) { + if (!capture_branch_entry(cpuc, event, idx)) + break; + idx++; + } + +skip_bank_1: + cpuc->branches->branch_stack.nr = idx; + cpuc->branches->branch_stack.hw_idx = -1ULL; + process_branch_aborts(cpuc); + + /* Unpause the buffer */ + write_sysreg_s(brbfcr & ~BRBFCR_EL1_PAUSED, SYS_BRBFCR_EL1); + isb(); + armv8pmu_branch_reset(); +} diff --git a/drivers/perf/arm_brbe.h b/drivers/perf/arm_brbe.h new file mode 100644 index 000000000000..a47480eec070 --- /dev/null +++ b/drivers/perf/arm_brbe.h @@ -0,0 +1,257 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Branch Record Buffer Extension Helpers. 
+ * + * Copyright (C) 2022 ARM Limited + * + * Author: Anshuman Khandual + */ +#define pr_fmt(fmt) "brbe: " fmt + +#include + +#define BRBFCR_EL1_BRANCH_FILTERS (BRBFCR_EL1_DIRECT | \ + BRBFCR_EL1_INDIRECT | \ + BRBFCR_EL1_RTN | \ + BRBFCR_EL1_INDCALL | \ + BRBFCR_EL1_DIRCALL | \ + BRBFCR_EL1_CONDDIR) + +#define BRBFCR_EL1_DEFAULT_CONFIG (BRBFCR_EL1_BANK_MASK | \ + BRBFCR_EL1_PAUSED | \ + BRBFCR_EL1_EnI | \ + BRBFCR_EL1_BRANCH_FILTERS) + +/* + * BRBTS_EL1 is currently not used for branch stack implementation + * purpose but BRBCR_EL1.TS needs to have a valid value from all + * available options. BRBCR_EL1_TS_VIRTUAL is selected for this. + */ +#define BRBCR_EL1_DEFAULT_TS FIELD_PREP(BRBCR_EL1_TS_MASK, BRBCR_EL1_TS_VIRTUAL) + +#define BRBCR_EL1_DEFAULT_CONFIG (BRBCR_EL1_EXCEPTION | \ + BRBCR_EL1_ERTN | \ + BRBCR_EL1_CC | \ + BRBCR_EL1_MPRED | \ + BRBCR_EL1_E1BRE | \ + BRBCR_EL1_E0BRE | \ + BRBCR_EL1_FZP | \ + BRBCR_EL1_DEFAULT_TS) +/* + * BRBE Instructions + * + * BRB_IALL : Invalidate the entire buffer + * BRB_INJ : Inject latest branch record derived from [BRBSRCINJ, BRBTGTINJ, BRBINFINJ] + */ +#define BRB_IALL __emit_inst(0xD5000000 | sys_insn(1, 1, 7, 2, 4) | (0x1f)) +#define BRB_INJ __emit_inst(0xD5000000 | sys_insn(1, 1, 7, 2, 5) | (0x1f)) + +/* + * BRBE Buffer Organization + * + * BRBE buffer is arranged as multiple banks of 32 branch record + * entries each. An individual branch record in a given bank could + * be accessed, after selecting the bank in BRBFCR_EL1.BANK and + * accessing the registers i.e [BRBSRC, BRBTGT, BRBINF] set with + * indices [0..31]. + * + * Bank 0 + * + * --------------------------------- ------ + * | 00 | BRBSRC | BRBTGT | BRBINF | | 00 | + * --------------------------------- ------ + * | 01 | BRBSRC | BRBTGT | BRBINF | | 01 | + * --------------------------------- ------ + * | .. | BRBSRC | BRBTGT | BRBINF | | .. | + * --------------------------------- ------ + * | 31 | BRBSRC | BRBTGT | BRBINF | | 31 | + * --------------------------------- ------ + * + * Bank 1 + * + * --------------------------------- ------ + * | 32 | BRBSRC | BRBTGT | BRBINF | | 00 | + * --------------------------------- ------ + * | 33 | BRBSRC | BRBTGT | BRBINF | | 01 | + * --------------------------------- ------ + * | .. | BRBSRC | BRBTGT | BRBINF | | .. 
| + * --------------------------------- ------ + * | 63 | BRBSRC | BRBTGT | BRBINF | | 31 | + * --------------------------------- ------ + */ +#define BRBE_BANK_MAX_ENTRIES 32 + +#define BRBE_BANK0_IDX_MIN 0 +#define BRBE_BANK0_IDX_MAX 31 +#define BRBE_BANK1_IDX_MIN 32 +#define BRBE_BANK1_IDX_MAX 63 + +struct brbe_hw_attr { + int brbe_version; + int brbe_cc; + int brbe_nr; + int brbe_format; +}; + +enum brbe_bank_idx { + BRBE_BANK_IDX_INVALID = -1, + BRBE_BANK_IDX_0, + BRBE_BANK_IDX_1, + BRBE_BANK_IDX_MAX +}; + +#define RETURN_READ_BRBSRCN(n) \ + read_sysreg_s(SYS_BRBSRC##n##_EL1) + +#define RETURN_READ_BRBTGTN(n) \ + read_sysreg_s(SYS_BRBTGT##n##_EL1) + +#define RETURN_READ_BRBINFN(n) \ + read_sysreg_s(SYS_BRBINF##n##_EL1) + +#define BRBE_REGN_CASE(n, case_macro) \ + case n: return case_macro(n); break + +#define BRBE_REGN_SWITCH(x, case_macro) \ + do { \ + switch (x) { \ + BRBE_REGN_CASE(0, case_macro); \ + BRBE_REGN_CASE(1, case_macro); \ + BRBE_REGN_CASE(2, case_macro); \ + BRBE_REGN_CASE(3, case_macro); \ + BRBE_REGN_CASE(4, case_macro); \ + BRBE_REGN_CASE(5, case_macro); \ + BRBE_REGN_CASE(6, case_macro); \ + BRBE_REGN_CASE(7, case_macro); \ + BRBE_REGN_CASE(8, case_macro); \ + BRBE_REGN_CASE(9, case_macro); \ + BRBE_REGN_CASE(10, case_macro); \ + BRBE_REGN_CASE(11, case_macro); \ + BRBE_REGN_CASE(12, case_macro); \ + BRBE_REGN_CASE(13, case_macro); \ + BRBE_REGN_CASE(14, case_macro); \ + BRBE_REGN_CASE(15, case_macro); \ + BRBE_REGN_CASE(16, case_macro); \ + BRBE_REGN_CASE(17, case_macro); \ + BRBE_REGN_CASE(18, case_macro); \ + BRBE_REGN_CASE(19, case_macro); \ + BRBE_REGN_CASE(20, case_macro); \ + BRBE_REGN_CASE(21, case_macro); \ + BRBE_REGN_CASE(22, case_macro); \ + BRBE_REGN_CASE(23, case_macro); \ + BRBE_REGN_CASE(24, case_macro); \ + BRBE_REGN_CASE(25, case_macro); \ + BRBE_REGN_CASE(26, case_macro); \ + BRBE_REGN_CASE(27, case_macro); \ + BRBE_REGN_CASE(28, case_macro); \ + BRBE_REGN_CASE(29, case_macro); \ + BRBE_REGN_CASE(30, case_macro); \ + BRBE_REGN_CASE(31, case_macro); \ + default: \ + pr_warn("unknown register index\n"); \ + return -1; \ + } \ + } while (0) + +static inline int buffer_to_brbe_idx(int buffer_idx) +{ + return buffer_idx % BRBE_BANK_MAX_ENTRIES; +} + +static inline u64 get_brbsrc_reg(int buffer_idx) +{ + int brbe_idx = buffer_to_brbe_idx(buffer_idx); + + BRBE_REGN_SWITCH(brbe_idx, RETURN_READ_BRBSRCN); +} + +static inline u64 get_brbtgt_reg(int buffer_idx) +{ + int brbe_idx = buffer_to_brbe_idx(buffer_idx); + + BRBE_REGN_SWITCH(brbe_idx, RETURN_READ_BRBTGTN); +} + +static inline u64 get_brbinf_reg(int buffer_idx) +{ + int brbe_idx = buffer_to_brbe_idx(buffer_idx); + + BRBE_REGN_SWITCH(brbe_idx, RETURN_READ_BRBINFN); +} + +static inline u64 brbe_record_valid(u64 brbinf) +{ + return FIELD_GET(BRBINFx_EL1_VALID_MASK, brbinf); +} + +static inline bool brbe_invalid(u64 brbinf) +{ + return brbe_record_valid(brbinf) == BRBINFx_EL1_VALID_NONE; +} + +static inline bool brbe_record_is_complete(u64 brbinf) +{ + return brbe_record_valid(brbinf) == BRBINFx_EL1_VALID_FULL; +} + +static inline bool brbe_record_is_source_only(u64 brbinf) +{ + return brbe_record_valid(brbinf) == BRBINFx_EL1_VALID_SOURCE; +} + +static inline bool brbe_record_is_target_only(u64 brbinf) +{ + return brbe_record_valid(brbinf) == BRBINFx_EL1_VALID_TARGET; +} + +static inline int brbe_get_in_tx(u64 brbinf) +{ + return FIELD_GET(BRBINFx_EL1_T_MASK, brbinf); +} + +static inline int brbe_get_mispredict(u64 brbinf) +{ + return FIELD_GET(BRBINFx_EL1_MPRED_MASK, brbinf); +} + +static inline int 
brbe_get_lastfailed(u64 brbinf) +{ + return FIELD_GET(BRBINFx_EL1_LASTFAILED_MASK, brbinf); +} + +static inline int brbe_get_cycles(u64 brbinf) +{ + /* + * Captured cycle count is unknown and hence + * should not be passed on to the user space. + */ + if (brbinf & BRBINFx_EL1_CCU) + return 0; + + return FIELD_GET(BRBINFx_EL1_CC_MASK, brbinf); +} + +static inline int brbe_get_type(u64 brbinf) +{ + return FIELD_GET(BRBINFx_EL1_TYPE_MASK, brbinf); +} + +static inline int brbe_get_el(u64 brbinf) +{ + return FIELD_GET(BRBINFx_EL1_EL_MASK, brbinf); +} + +static inline int brbe_get_numrec(u64 brbidr) +{ + return FIELD_GET(BRBIDR0_EL1_NUMREC_MASK, brbidr); +} + +static inline int brbe_get_format(u64 brbidr) +{ + return FIELD_GET(BRBIDR0_EL1_FORMAT_MASK, brbidr); +} + +static inline int brbe_get_cc_bits(u64 brbidr) +{ + return FIELD_GET(BRBIDR0_EL1_CC_MASK, brbidr); +} diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c index 10bbe02f4079..7c9e9045c24e 100644 --- a/drivers/perf/arm_pmuv3.c +++ b/drivers/perf/arm_pmuv3.c @@ -813,6 +813,10 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu) if (!armpmu_event_set_period(event)) continue; + /* + * PMU IRQ should remain asserted until all branch records + * are captured and processed into struct perf_sample_data. + */ if (has_branch_stack(event) && !WARN_ON(!cpuc->branches)) { armv8pmu_branch_read(cpuc, event); perf_sample_save_brstack(&data, event, &cpuc->branches->branch_stack); From patchwork Tue Jul 11 08:24:52 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anshuman Khandual X-Patchwork-Id: 118328 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:a6b2:0:b0:3e4:2afc:c1 with SMTP id c18csp333971vqm; Tue, 11 Jul 2023 01:53:00 -0700 (PDT) X-Google-Smtp-Source: APBJJlEji6eNIUBgbynE0ewkwxm2s/s5kKRv3VJ8GRlqP4ip4WZiK/FtnIuXiT3KF77Cdjdfhb63 X-Received: by 2002:a17:907:7d89:b0:988:b204:66b0 with SMTP id oz9-20020a1709077d8900b00988b20466b0mr22641177ejc.33.1689065580657; Tue, 11 Jul 2023 01:53:00 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1689065580; cv=none; d=google.com; s=arc-20160816; b=yMzIyC0p8mVlpBGoUDos7PrvT+ctwvFfF9RgH8roh+xtpS/fk4aO9xmeBq51T1psWJ tf2aUWL7bBIeLYhdWO3u+FouR8IveQUH5PQoz7Eur9mVbB3QoL0pj1g+IgOueYyIAx8u TsWYQxeLJitKEBpqjLi6q8w6vofjaR5RWicSvrzVlBQpYP2vzhnsaXgZeVhjzggUBbBo 7jRyXPhJi+mupkNzGacsvu1q22EiX2XtGZ8LxZUEMVJOVWbMkB4pvczpo63WGrWCddg1 B0MKiENngg3qkJorZU69T8b3nIWYYfmvgnES+zMPCdiyiMvEFxsqZgU27Toi4PJvETyj qk+A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=uAr37oD0YMttpVIsTeDhI+D8rVrdAF8ryVcjt0e6Ksc=; fh=uKDf5R1eKBEpFB2sP5rwFdsisg1sKNqEQFf4Rrsmuig=; b=PORIyowfFo1rGjTj8KBDe6yo9oOoyIFxSTbw5/LdqJnNHU3lUYFP8LMoLiNTjrBMya S05ONT8kfjVUZdaZRh5p2Mj0CmVQPCvkSFJYFkeUyN21dqzBA4hR3+89WxnVqLGYefTt 1mt0RI05xn4Qui9ridYKRK1ohC3amgkkbmYn7g+Ah8ncsIXRP5nbq5dE1nZmC0Z8Blrg y/jqpZKpk6IObr1BU7uTg3pTiviO5GvfqOO+sDiPF8+5gd6bGg3oqkncpy7nzM9a6vcz mYKvimokBIAAmmdQE6/Jcf71A66Cgd7kako57qwxsfN7yj6KStv0puOs3/OtxNMiuv53 N8gA== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: from out1.vger.email (out1.vger.email. 
From: Anshuman Khandual
To: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, will@kernel.org, catalin.marinas@arm.com, mark.rutland@arm.com
Cc: Anshuman Khandual, Mark Brown, James Clark, Rob Herring, Marc Zyngier, Suzuki Poulose, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, linux-perf-users@vger.kernel.org
Subject: [PATCH V13 - RESEND 07/10] arm64/perf: Add PERF_ATTACH_TASK_DATA to events with has_branch_stack()
Date: Tue, 11 Jul 2023 13:54:52 +0530
Message-Id: <20230711082455.215983-8-anshuman.khandual@arm.com>
In-Reply-To: <20230711082455.215983-1-anshuman.khandual@arm.com>
References: <20230711082455.215983-1-anshuman.khandual@arm.com>

Short running processes, i.e. those getting very little cpu run time each time they get scheduled in, might not accumulate many branch records before a PMU IRQ actually happens. Such processes are therefore likely to lose most of their branch records while being scheduled in and out across the various cpus on the system. All branch records that occur during a process's cpu run time need to be saved when it gets scheduled out, which requires an event context specific buffer for such storage.

This adds the PERF_ATTACH_TASK_DATA flag unconditionally for all branch stack sampling events, which allocates task_ctx_data during event init. This also creates a platform specific task_ctx_data kmem cache which will serve such allocation requests.
This adds a new structure 'arm64_perf_task_context' which encapsulates brbe register set for maximum possible BRBE entries on the HW along with a valid records tracking element. Cc: Catalin Marinas Cc: Will Deacon Cc: Mark Rutland Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Tested-by: James Clark Signed-off-by: Anshuman Khandual --- arch/arm/include/asm/arm_pmuv3.h | 2 ++ arch/arm64/include/asm/perf_event.h | 4 ++++ drivers/perf/arm_brbe.c | 21 +++++++++++++++++++++ drivers/perf/arm_brbe.h | 13 +++++++++++++ drivers/perf/arm_pmuv3.c | 16 +++++++++++++--- 5 files changed, 53 insertions(+), 3 deletions(-) diff --git a/arch/arm/include/asm/arm_pmuv3.h b/arch/arm/include/asm/arm_pmuv3.h index d7c438939a6f..3d8faf4200dc 100644 --- a/arch/arm/include/asm/arm_pmuv3.h +++ b/arch/arm/include/asm/arm_pmuv3.h @@ -257,5 +257,7 @@ static inline void armv8pmu_branch_enable(struct perf_event *event) { } static inline void armv8pmu_branch_disable(struct perf_event *event) { } static inline void armv8pmu_branch_probe(struct arm_pmu *arm_pmu) { } static inline void armv8pmu_branch_reset(void) { } +static inline int armv8pmu_task_ctx_cache_alloc(struct arm_pmu *arm_pmu) { return 0; } +static inline void armv8pmu_task_ctx_cache_free(struct arm_pmu *arm_pmu) { } #endif #endif diff --git a/arch/arm64/include/asm/perf_event.h b/arch/arm64/include/asm/perf_event.h index 49a973571415..b0c12a5882df 100644 --- a/arch/arm64/include/asm/perf_event.h +++ b/arch/arm64/include/asm/perf_event.h @@ -38,6 +38,8 @@ void armv8pmu_branch_enable(struct perf_event *event); void armv8pmu_branch_disable(struct perf_event *event); void armv8pmu_branch_probe(struct arm_pmu *arm_pmu); void armv8pmu_branch_reset(void); +int armv8pmu_task_ctx_cache_alloc(struct arm_pmu *arm_pmu); +void armv8pmu_task_ctx_cache_free(struct arm_pmu *arm_pmu); #else static inline void armv8pmu_branch_read(struct pmu_hw_events *cpuc, struct perf_event *event) { @@ -62,6 +64,8 @@ static inline void armv8pmu_branch_disable(struct perf_event *event) static inline void armv8pmu_branch_probe(struct arm_pmu *arm_pmu) { } static inline void armv8pmu_branch_reset(void) { } +static inline int armv8pmu_task_ctx_cache_alloc(struct arm_pmu *arm_pmu) { return 0; } +static inline void armv8pmu_task_ctx_cache_free(struct arm_pmu *arm_pmu) { } #endif #endif #endif diff --git a/drivers/perf/arm_brbe.c b/drivers/perf/arm_brbe.c index 79106300cf2e..a74459445813 100644 --- a/drivers/perf/arm_brbe.c +++ b/drivers/perf/arm_brbe.c @@ -109,6 +109,27 @@ bool armv8pmu_branch_attr_valid(struct perf_event *event) return true; } +static inline struct kmem_cache * +arm64_create_brbe_task_ctx_kmem_cache(size_t size) +{ + return kmem_cache_create("arm64_brbe_task_ctx", size, 0, 0, NULL); +} + +int armv8pmu_task_ctx_cache_alloc(struct arm_pmu *arm_pmu) +{ + size_t size = sizeof(struct arm64_perf_task_context); + + arm_pmu->pmu.task_ctx_cache = arm64_create_brbe_task_ctx_kmem_cache(size); + if (!arm_pmu->pmu.task_ctx_cache) + return -ENOMEM; + return 0; +} + +void armv8pmu_task_ctx_cache_free(struct arm_pmu *arm_pmu) +{ + kmem_cache_destroy(arm_pmu->pmu.task_ctx_cache); +} + static int brbe_attributes_probe(struct arm_pmu *armpmu, u32 brbe) { u64 brbidr = read_sysreg_s(SYS_BRBIDR0_EL1); diff --git a/drivers/perf/arm_brbe.h b/drivers/perf/arm_brbe.h index a47480eec070..4a72c2ba7140 100644 --- a/drivers/perf/arm_brbe.h +++ b/drivers/perf/arm_brbe.h @@ -80,12 +80,25 @@ * --------------------------------- ------ */ #define BRBE_BANK_MAX_ENTRIES 32 +#define 
BRBE_MAX_BANK 2 +#define BRBE_MAX_ENTRIES (BRBE_BANK_MAX_ENTRIES * BRBE_MAX_BANK) #define BRBE_BANK0_IDX_MIN 0 #define BRBE_BANK0_IDX_MAX 31 #define BRBE_BANK1_IDX_MIN 32 #define BRBE_BANK1_IDX_MAX 63 +struct brbe_regset { + unsigned long brbsrc; + unsigned long brbtgt; + unsigned long brbinf; +}; + +struct arm64_perf_task_context { + struct brbe_regset store[BRBE_MAX_ENTRIES]; + int nr_brbe_records; +}; + struct brbe_hw_attr { int brbe_version; int brbe_cc; diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c index 7c9e9045c24e..408974d5c57b 100644 --- a/drivers/perf/arm_pmuv3.c +++ b/drivers/perf/arm_pmuv3.c @@ -1038,8 +1038,12 @@ static int __armv8_pmuv3_map_event(struct perf_event *event, hw_event_id = __armv8_pmuv3_map_event_id(armpmu, event); - if (has_branch_stack(event) && !armv8pmu_branch_attr_valid(event)) - return -EOPNOTSUPP; + if (has_branch_stack(event)) { + if (!armv8pmu_branch_attr_valid(event)) + return -EOPNOTSUPP; + + event->attach_state |= PERF_ATTACH_TASK_DATA; + } /* * CHAIN events only work when paired with an adjacent counter, and it @@ -1204,9 +1208,15 @@ static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu) return -ENODEV; if (cpu_pmu->has_branch_stack) { - ret = branch_records_alloc(cpu_pmu); + ret = armv8pmu_task_ctx_cache_alloc(cpu_pmu); if (ret) return ret; + + ret = branch_records_alloc(cpu_pmu); + if (ret) { + armv8pmu_task_ctx_cache_free(cpu_pmu); + return ret; + } } return 0; } From patchwork Tue Jul 11 08:24:53 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anshuman Khandual X-Patchwork-Id: 118331 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:a6b2:0:b0:3e4:2afc:c1 with SMTP id c18csp334723vqm; Tue, 11 Jul 2023 01:55:12 -0700 (PDT) X-Google-Smtp-Source: APBJJlFUhmXCi1bLCUi9fsOa8CYHLwUzmfDgOk779pq9FsQoQUrYOgcvfjZapirKMpgXSNr2OmW7 X-Received: by 2002:a05:6402:14c7:b0:51d:d3d4:d02f with SMTP id f7-20020a05640214c700b0051dd3d4d02fmr15832255edx.8.1689065712408; Tue, 11 Jul 2023 01:55:12 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1689065712; cv=none; d=google.com; s=arc-20160816; b=zQStNiXgxjA1vYMT+XFiWzClDL4JhhZm3c3mU3Xnnxewy/bfjOcQRTCBBtqaA9qHJq D/3w7wTP2j027msOPj8heh/SYqcbRaKRyCFuTYawrMp6uBgc2nYaf+rITW/PMQVB5Jib G/0mjm7XqCgjL6a5eOxNQVIMvfufQPVnXyQbDOG42vRGlUU9xCxuijXvQq7JWevzYl0u P3lMoakQOCe5sSwis6Kzir+Jwi7ibtQvFmg0hlTwhUOkmObLTN0vPuKGd6M/8QJCCtJO PEs/FEplLu5981bcDaBSw2DYvyNRdLpOiBgfUNSWnyBczdqRIydUXWn1CqPc9roq0fnH PppQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=UxMl/HVW9v6UnCW9nydtJDL0yazyeTdhr57JCOn/bfU=; fh=uKDf5R1eKBEpFB2sP5rwFdsisg1sKNqEQFf4Rrsmuig=; b=yOKbu7t9IXlAuzDGWmWqArZ2K0e2u+HcO525a6bqAJuMTV5Gs6Mu0CMWozu9+Cw62t egjW+SjqOQWYjBvY1saRXLXSlV5P6yEXAsbdX0yHRIENReFg9M3/zNNBI6A7GMa7IkVX tT4+T8KHaERx/bj+u+KGkbMIw7oYf67l9nZ7jBHFMPCYSb8IbnTSYxPggGphYDPZSehF K66NmBvpOU8NYMwAWJKODEXIDcr7zUYZrmCsBjslHyY2xtEM6C4vWGznRNN6eXWWtyyS vpE69RkGLVgmARbAyUy42naw3AulHNupyTZQu85EY7/EUIroZoOPBQUvutN0fF7fdfzm OMxg== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: from out1.vger.email (out1.vger.email. 
From: Anshuman Khandual
To: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, will@kernel.org, catalin.marinas@arm.com, mark.rutland@arm.com
Cc: Anshuman Khandual, Mark Brown, James Clark, Rob Herring, Marc Zyngier, Suzuki Poulose, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, linux-perf-users@vger.kernel.org
Subject: [PATCH V13 - RESEND 08/10] arm64/perf: Add struct brbe_regset helper functions
Date: Tue, 11 Jul 2023 13:54:53 +0530
Message-Id: <20230711082455.215983-9-anshuman.khandual@arm.com>
In-Reply-To: <20230711082455.215983-1-anshuman.khandual@arm.com>
References: <20230711082455.215983-1-anshuman.khandual@arm.com>

The primary abstraction level for fetching branch records from the BRBE HW is now 'struct brbe_regset', which contains storage for all three BRBE registers, i.e. BRBSRC, BRBTGT and BRBINF. Whether branch record processing happens in the task sched out path or in the PMU IRQ handling path, these registers need to be extracted from the HW. Afterwards both the live and the stored sets need to be stitched together to create the final branch record set. This adds the required helper functions for such operations, as sketched below.
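For illustration, a minimal standalone sketch of the stitch semantics described above; the struct and function names here are simplified stand-ins rather than the kernel code. Live records are copied to the head of the stored buffer, the previously stored records are shifted behind them, and the combined length is capped at the maximum number of hardware records.

#include <stdio.h>
#include <string.h>

struct regset { unsigned long brbsrc, brbtgt, brbinf; };

static int stitch_records(struct regset *stored, const struct regset *live,
			  int nr_stored, int nr_live, int nr_max)
{
	/* How many stored records still fit behind the live ones */
	int nr_move = nr_stored < nr_max - nr_live ? nr_stored : nr_max - nr_live;

	/* Shift the stored tail to make room, then copy live records up front */
	memmove(&stored[nr_live], &stored[0], nr_move * sizeof(*stored));
	memcpy(&stored[0], &live[0], nr_live * sizeof(*stored));

	return nr_live + nr_stored < nr_max ? nr_live + nr_stored : nr_max;
}

int main(void)
{
	struct regset stored[8] = { { .brbsrc = 0xa0 }, { .brbsrc = 0xa1 } };	/* S0, S1 */
	struct regset live[8]   = { { .brbsrc = 0x10 }, { .brbsrc = 0x11 },
				    { .brbsrc = 0x12 } };			/* L0..L2 */
	int nr = stitch_records(stored, live, 2, 3, 8);

	/* Expected order, newest first: L0 L1 L2 S0 S1 (5 records) */
	for (int i = 0; i < nr; i++)
		printf("%#lx ", stored[i].brbsrc);
	printf("\n");
	return 0;
}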
Cc: Catalin Marinas Cc: Will Deacon Cc: Mark Rutland Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Tested-by: James Clark Acked-by: Mark Rutland Signed-off-by: Anshuman Khandual --- drivers/perf/arm_brbe.c | 121 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 121 insertions(+) diff --git a/drivers/perf/arm_brbe.c b/drivers/perf/arm_brbe.c index a74459445813..203cd4f350d5 100644 --- a/drivers/perf/arm_brbe.c +++ b/drivers/perf/arm_brbe.c @@ -44,6 +44,127 @@ static void select_brbe_bank(int bank) isb(); } +static bool __read_brbe_regset(struct brbe_regset *entry, int idx) +{ + entry->brbinf = get_brbinf_reg(idx); + + if (brbe_invalid(entry->brbinf)) + return false; + + entry->brbsrc = get_brbsrc_reg(idx); + entry->brbtgt = get_brbtgt_reg(idx); + return true; +} + +/* + * Read all BRBE entries in HW until the first invalid entry. + * + * The caller must ensure that the BRBE is not concurrently modifying these + * branch entries. + */ +static int capture_brbe_regset(struct brbe_regset *buf, int nr_hw_entries) +{ + int idx = 0; + + select_brbe_bank(BRBE_BANK_IDX_0); + while (idx < nr_hw_entries && idx <= BRBE_BANK0_IDX_MAX) { + if (!__read_brbe_regset(&buf[idx], idx)) + return idx; + idx++; + } + + select_brbe_bank(BRBE_BANK_IDX_1); + while (idx < nr_hw_entries && idx <= BRBE_BANK1_IDX_MAX) { + if (!__read_brbe_regset(&buf[idx], idx)) + return idx; + idx++; + } + return idx; +} + +/* + * This function concatenates branch records from stored and live buffer + * up to maximum nr_max records and the stored buffer holds the resultant + * buffer. The concatenated buffer contains all the branch records from + * the live buffer but might contain some from stored buffer considering + * the maximum combined length does not exceed 'nr_max'. + * + * Stored records Live records + * ------------------------------------------------^ + * | S0 | L0 | Newest | + * --------------------------------- | + * | S1 | L1 | | + * --------------------------------- | + * | S2 | L2 | | + * --------------------------------- | + * | S3 | L3 | | + * --------------------------------- | + * | S4 | L4 | nr_max + * --------------------------------- | + * | | L5 | | + * --------------------------------- | + * | | L6 | | + * --------------------------------- | + * | | L7 | | + * --------------------------------- | + * | | | | + * --------------------------------- | + * | | | Oldest | + * ------------------------------------------------V + * + * + * S0 is the newest in the stored records, where as L7 is the oldest in + * the live records. Unless the live buffer is detected as being full + * thus potentially dropping off some older records, L7 and S0 records + * are contiguous in time for a user task context. The stitched buffer + * here represents maximum possible branch records, contiguous in time. 
+ * + * Stored records Live records + * ------------------------------------------------^ + * | L0 | L0 | Newest | + * --------------------------------- | + * | L0 | L1 | | + * --------------------------------- | + * | L2 | L2 | | + * --------------------------------- | + * | L3 | L3 | | + * --------------------------------- | + * | L4 | L4 | nr_max + * --------------------------------- | + * | L5 | L5 | | + * --------------------------------- | + * | L6 | L6 | | + * --------------------------------- | + * | L7 | L7 | | + * --------------------------------- | + * | S0 | | | + * --------------------------------- | + * | S1 | | Oldest | + * ------------------------------------------------V + * | S2 | <----| + * ----------------- | + * | S3 | <----| Dropped off after nr_max + * ----------------- | + * | S4 | <----| + * ----------------- + */ +static int stitch_stored_live_entries(struct brbe_regset *stored, + struct brbe_regset *live, + int nr_stored, int nr_live, + int nr_max) +{ + int nr_move = min(nr_stored, nr_max - nr_live); + + /* Move the tail of the buffer to make room for the new entries */ + memmove(&stored[nr_live], &stored[0], nr_move * sizeof(*stored)); + + /* Copy the new entries into the head of the buffer */ + memcpy(&stored[0], &live[0], nr_live * sizeof(*stored)); + + /* Return the number of entries in the stitched buffer */ + return min(nr_live + nr_stored, nr_max); +} + /* * Generic perf branch filters supported on BRBE * From patchwork Tue Jul 11 08:24:54 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anshuman Khandual X-Patchwork-Id: 118323 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:a6b2:0:b0:3e4:2afc:c1 with SMTP id c18csp331589vqm; Tue, 11 Jul 2023 01:46:34 -0700 (PDT) X-Google-Smtp-Source: APBJJlFN0Tk3sTAFF8fV/PJY0VXFTmpKOy7Cd2XAmbVXDMQFwnahrpdBYV7mPi2mOanRXNSffv05 X-Received: by 2002:a17:906:3f09:b0:992:528:abe with SMTP id c9-20020a1709063f0900b0099205280abemr14685345ejj.53.1689065194437; Tue, 11 Jul 2023 01:46:34 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1689065194; cv=none; d=google.com; s=arc-20160816; b=mCg/28Sx+bzyzaSbERyB+/6UDRTjn8s72EvgrxYyjUxMytJ/oxXnda/9ZaFLC1oIYJ beMjp0pkrYrCtlJp3TBYnl7fqQxcciK2Q1SpWVaTF6OzwmvZ1OecdlaK6RNUf/Md5HWE gm5kAX0iH4v8SHfSqwtBqLO7ZC9PYREFzalRG5Bj4bB1+3waHsCr+Hb1+Nx8YtA8jTlD PGrR7HovswapepzmjOmYUEPNpvfwqimAcUlTY5piFBBMKa6N+at3WHF511U6h+o/U31i PQbagY2nMPre+9Ew4p/UZOrfGV9c3Kkz6mu1skMyaQCJWYA76ceG78NLBY0e6HB4/n4I IF6A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=je//YSa5Ke4Ds4d9Gw151MvigT8fCVI4BMmVsMhcBI4=; fh=uKDf5R1eKBEpFB2sP5rwFdsisg1sKNqEQFf4Rrsmuig=; b=Evc+twBDEbdNpbmEuInok/Z0F/EwfHT0trnftH8qPXX5qfYtJABRhyIlyx5nVpJ8fa FSzP6QLcQx8MAQ0ehwR/dyycK2WmVUF1PbLDbolW7Q9mC0+VDfGkKv/D9OIRltGivspf +iu4AwUUZO/AaaIZVQ3PeIurCc28+F1DKWg/oHBSMkwBxq4rmoVl8TNnI1CECWOnIM9i sLivvgkCKfNBEwKKcjg9yCnWvoKdqJNOmdd7/6AOSuW/iXjddN8ydshTUbyQFohbAO3U i/8adbbKqrNi49omHdAwP0qeeG4D+w6YToKVIQeFVVIY9W+YiJJLKSj2wndCAd/bKdra iTDw== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: from out1.vger.email (out1.vger.email. 
From: Anshuman Khandual
To: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, will@kernel.org, catalin.marinas@arm.com, mark.rutland@arm.com
Cc: Anshuman Khandual, Mark Brown, James Clark, Rob Herring, Marc Zyngier, Suzuki Poulose, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, linux-perf-users@vger.kernel.org
Subject: [PATCH V13 - RESEND 09/10] arm64/perf: Implement branch records save on task sched out
Date: Tue, 11 Jul 2023 13:54:54 +0530
Message-Id: <20230711082455.215983-10-anshuman.khandual@arm.com>
In-Reply-To: <20230711082455.215983-1-anshuman.khandual@arm.com>
References: <20230711082455.215983-1-anshuman.khandual@arm.com>

This modifies the current armv8pmu_sched_task() to implement a branch records save mechanism via armv8pmu_branch_save() when a task is scheduled out of a cpu. BRBE is paused and disabled for all exception levels before the branch records get captured, which are then concatenated with all existing stored records present in the task context, maintaining contiguity. The final length of the concatenated buffer never exceeds the implemented BRBE length. A simplified sketch of this sequence follows.
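A self-contained sketch of the sched out sequence, for illustration only: pause BRBE, read out whatever live records exist, unpause, then fold them into the per-task store. The brbe_* stubs stand in for the BRBFCR_EL1.PAUSED writes and banked register reads done by the real driver, and only the record count is modelled here; the actual data movement is what stitch_stored_live_entries() performs in the previous patch.

#include <stdio.h>

#define NR_HW_ENTRIES 8			/* stands in for brbe_get_numrec() */

struct regset { unsigned long brbsrc, brbtgt, brbinf; };

struct task_context {
	struct regset store[NR_HW_ENTRIES];
	int nr_records;
};

static void brbe_pause(void)   { /* BRBFCR_EL1.PAUSED = 1; isb() */ }
static void brbe_unpause(void) { /* BRBFCR_EL1.PAUSED = 0; isb() */ }

/* Pretend the hardware captured three valid records during this run */
static int brbe_capture(struct regset *live, int nr_max)
{
	int nr = nr_max < 3 ? nr_max : 3;

	for (int i = 0; i < nr; i++)
		live[i].brbsrc = 0x1000 + 4 * i;
	return nr;
}

static void branch_save(struct task_context *ctx)
{
	struct regset live[NR_HW_ENTRIES];
	int nr_live;

	brbe_pause();			/* stop further capture while reading */
	nr_live = brbe_capture(live, NR_HW_ENTRIES);
	brbe_unpause();

	/* The real code stitches live + stored records; only the count is modelled */
	ctx->nr_records = ctx->nr_records + nr_live > NR_HW_ENTRIES ?
			  NR_HW_ENTRIES : ctx->nr_records + nr_live;
}

int main(void)
{
	struct task_context ctx = { .nr_records = 0 };

	for (int i = 0; i < 4; i++)	/* four short runs, each scheds out */
		branch_save(&ctx);

	/* The stored record count never exceeds the implemented BRBE length */
	printf("records held in task context: %d\n", ctx.nr_records);
	return 0;
}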
Cc: Catalin Marinas Cc: Will Deacon Cc: Mark Rutland Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Tested-by: James Clark Acked-by: Mark Rutland Signed-off-by: Anshuman Khandual --- arch/arm/include/asm/arm_pmuv3.h | 1 + arch/arm64/include/asm/perf_event.h | 2 ++ drivers/perf/arm_brbe.c | 30 +++++++++++++++++++++++++++++ drivers/perf/arm_pmuv3.c | 14 ++++++++++++-- 4 files changed, 45 insertions(+), 2 deletions(-) diff --git a/arch/arm/include/asm/arm_pmuv3.h b/arch/arm/include/asm/arm_pmuv3.h index 3d8faf4200dc..3d047d2c9430 100644 --- a/arch/arm/include/asm/arm_pmuv3.h +++ b/arch/arm/include/asm/arm_pmuv3.h @@ -259,5 +259,6 @@ static inline void armv8pmu_branch_probe(struct arm_pmu *arm_pmu) { } static inline void armv8pmu_branch_reset(void) { } static inline int armv8pmu_task_ctx_cache_alloc(struct arm_pmu *arm_pmu) { return 0; } static inline void armv8pmu_task_ctx_cache_free(struct arm_pmu *arm_pmu) { } +static inline void armv8pmu_branch_save(struct arm_pmu *arm_pmu, void *ctx) { } #endif #endif diff --git a/arch/arm64/include/asm/perf_event.h b/arch/arm64/include/asm/perf_event.h index b0c12a5882df..36e7dfb466a6 100644 --- a/arch/arm64/include/asm/perf_event.h +++ b/arch/arm64/include/asm/perf_event.h @@ -40,6 +40,7 @@ void armv8pmu_branch_probe(struct arm_pmu *arm_pmu); void armv8pmu_branch_reset(void); int armv8pmu_task_ctx_cache_alloc(struct arm_pmu *arm_pmu); void armv8pmu_task_ctx_cache_free(struct arm_pmu *arm_pmu); +void armv8pmu_branch_save(struct arm_pmu *arm_pmu, void *ctx); #else static inline void armv8pmu_branch_read(struct pmu_hw_events *cpuc, struct perf_event *event) { @@ -66,6 +67,7 @@ static inline void armv8pmu_branch_probe(struct arm_pmu *arm_pmu) { } static inline void armv8pmu_branch_reset(void) { } static inline int armv8pmu_task_ctx_cache_alloc(struct arm_pmu *arm_pmu) { return 0; } static inline void armv8pmu_task_ctx_cache_free(struct arm_pmu *arm_pmu) { } +static inline void armv8pmu_branch_save(struct arm_pmu *arm_pmu, void *ctx) { } #endif #endif #endif diff --git a/drivers/perf/arm_brbe.c b/drivers/perf/arm_brbe.c index 203cd4f350d5..2177632befa6 100644 --- a/drivers/perf/arm_brbe.c +++ b/drivers/perf/arm_brbe.c @@ -165,6 +165,36 @@ static int stitch_stored_live_entries(struct brbe_regset *stored, return min(nr_live + nr_stored, nr_max); } +static int brbe_branch_save(struct brbe_regset *live, int nr_hw_entries) +{ + u64 brbfcr = read_sysreg_s(SYS_BRBFCR_EL1); + int nr_live; + + write_sysreg_s(brbfcr | BRBFCR_EL1_PAUSED, SYS_BRBFCR_EL1); + isb(); + + nr_live = capture_brbe_regset(live, nr_hw_entries); + + write_sysreg_s(brbfcr & ~BRBFCR_EL1_PAUSED, SYS_BRBFCR_EL1); + isb(); + + return nr_live; +} + +void armv8pmu_branch_save(struct arm_pmu *arm_pmu, void *ctx) +{ + struct arm64_perf_task_context *task_ctx = ctx; + struct brbe_regset live[BRBE_MAX_ENTRIES]; + int nr_live, nr_store, nr_hw_entries; + + nr_hw_entries = brbe_get_numrec(arm_pmu->reg_brbidr); + nr_live = brbe_branch_save(live, nr_hw_entries); + nr_store = task_ctx->nr_brbe_records; + nr_store = stitch_stored_live_entries(task_ctx->store, live, nr_store, + nr_live, nr_hw_entries); + task_ctx->nr_brbe_records = nr_store; +} + /* * Generic perf branch filters supported on BRBE * diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c index 408974d5c57b..aa3c7b3dcdd6 100644 --- a/drivers/perf/arm_pmuv3.c +++ b/drivers/perf/arm_pmuv3.c @@ -923,9 +923,19 @@ static int armv8pmu_user_event_idx(struct perf_event *event) static void armv8pmu_sched_task(struct 
perf_event_pmu_context *pmu_ctx, bool sched_in) { struct arm_pmu *armpmu = to_arm_pmu(pmu_ctx->pmu); + void *task_ctx = pmu_ctx ? pmu_ctx->task_ctx_data : NULL; - if (sched_in && armpmu->has_branch_stack) - armv8pmu_branch_reset(); + if (armpmu->has_branch_stack) { + /* Save branch records in task_ctx on sched out */ + if (task_ctx && !sched_in) { + armv8pmu_branch_save(armpmu, task_ctx); + return; + } + + /* Reset branch records on sched in */ + if (sched_in) + armv8pmu_branch_reset(); + } } /* From patchwork Tue Jul 11 08:24:55 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anshuman Khandual X-Patchwork-Id: 118321 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:a6b2:0:b0:3e4:2afc:c1 with SMTP id c18csp329767vqm; Tue, 11 Jul 2023 01:41:40 -0700 (PDT) X-Google-Smtp-Source: APBJJlEvB5nYvYWyD3lTxBVVW0ONvu0K4Kj1nBMk8vhA8T5w/c5IdqfQdXErC6ZQs8RsVWwWK+B3 X-Received: by 2002:a17:902:e80d:b0:1b8:b436:bef3 with SMTP id u13-20020a170902e80d00b001b8b436bef3mr13206193plg.24.1689064900125; Tue, 11 Jul 2023 01:41:40 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1689064900; cv=none; d=google.com; s=arc-20160816; b=RyqGz369V2NWKN9k/JN1wvLiE0HjknN0s3yi66isT0ZqLOCLzTjGSI/zan8TenGDFc plWG3Fce/ighJUN1Tr2/Mc+3sOBSlBSid9Ez6XbKpRCO4sVpFOq/4carqcJrkc/+1JEu FqcdWM4vtIwFOe4UcnUiyO8+0jTVbQtettTNC31x5rPmVF5TtfrztcE1A/NcEQ79+GCZ zPshBmZ9KuIYICXd+2SbCWFcLuRexkIg5wr6sxHur/zTGxvd3hfj+rb1f1qKYP4O/g1D oHS7HdE2VFhD0UepcWGbFpCgI9OUz/Em+qXLx30NtmEcQr8ZtKuCVC20+6D9y9Tzz+Ff FWIQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=aif5AvRzZRYIEHGz44dMUX4EXn2+mF01IeV4B2cj2Lo=; fh=uKDf5R1eKBEpFB2sP5rwFdsisg1sKNqEQFf4Rrsmuig=; b=uj1RKQhg3RXhZ5tNf+BxJ34Zrw6IMzTqYRM4mkRJ7Uw+PrgoBDqr1zsQMwlzCTOgQV JBcSNxvP1mKfxLG7IGOM90cNWOawb9A/1A1cG3rSyauAyLvviV5x/MHBoPq48eFYV6xS 4vhLjyPvx++ny/0qskNzxI4TR0EAxGeeyF/aQfr8t2DGW+EUFftqDFrOUAMHd4QdXCwN B2aqcOJsX6nmNzMwrtFNPc4CEXDQ7HBNVdwMIMrnBSm9Et8jHfh+jXh41Lg+/t9AKge1 hA7JZ9Jw0F4TWqKVprFf9sxwB7UBoxPbt6vBw4/hpQz/ynqAlxb4euk6OMhoTHtAZ026 WT8w== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: from out1.vger.email (out1.vger.email. 
From: Anshuman Khandual
To: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, will@kernel.org, catalin.marinas@arm.com, mark.rutland@arm.com
Cc: Anshuman Khandual, Mark Brown, James Clark, Rob Herring, Marc Zyngier, Suzuki Poulose, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, linux-perf-users@vger.kernel.org
Subject: [PATCH V13 - RESEND 10/10] arm64/perf: Implement branch records save on PMU IRQ
Date: Tue, 11 Jul 2023 13:54:55 +0530
Message-Id: <20230711082455.215983-11-anshuman.khandual@arm.com>
In-Reply-To: <20230711082455.215983-1-anshuman.khandual@arm.com>
References: <20230711082455.215983-1-anshuman.khandual@arm.com>

This modifies armv8pmu_branch_read() to concatenate the live entries with the entries stored in the task context, and then process the resultant buffer to create the perf branch entry array for perf_sample_data. It follows the same principle as the task sched out path, as illustrated below.
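To make the record conversion concrete, here is a minimal standalone sketch with simplified types: 'enum rec_valid' and 'struct branch_entry' are stand-ins for the BRBINFx_EL1.VALID encoding and struct perf_branch_entry, not the kernel definitions. A complete record provides both addresses, while a source only or target only record provides just one, matching the behaviour described above.

#include <stdio.h>

enum rec_valid { REC_NONE, REC_SOURCE, REC_TARGET, REC_FULL };

struct regset { unsigned long src, tgt; enum rec_valid valid; };
struct branch_entry { unsigned long from, to; };

/* Map one captured regset onto a branch entry based on its validity */
static void regset_to_entry(const struct regset *r, struct branch_entry *e)
{
	e->from = 0;
	e->to = 0;

	switch (r->valid) {
	case REC_FULL:
		e->from = r->src;
		e->to = r->tgt;
		break;
	case REC_SOURCE:
		e->from = r->src;
		break;
	case REC_TARGET:
		e->to = r->tgt;
		break;
	default:
		break;
	}
}

int main(void)
{
	struct regset recs[] = {
		{ 0x1000, 0x2000, REC_FULL },
		{ 0x3000, 0,      REC_SOURCE },
		{ 0,      0x4000, REC_TARGET },
	};
	struct branch_entry entries[3];

	for (int i = 0; i < 3; i++) {
		regset_to_entry(&recs[i], &entries[i]);
		printf("from=%#lx to=%#lx\n", entries[i].from, entries[i].to);
	}
	return 0;
}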
Cc: Catalin Marinas Cc: Will Deacon Cc: Mark Rutland Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Tested-by: James Clark Acked-by: Mark Rutland Signed-off-by: Anshuman Khandual --- drivers/perf/arm_brbe.c | 69 +++++++++++++++++++---------------------- 1 file changed, 32 insertions(+), 37 deletions(-) diff --git a/drivers/perf/arm_brbe.c b/drivers/perf/arm_brbe.c index 2177632befa6..cb765aaf1b52 100644 --- a/drivers/perf/arm_brbe.c +++ b/drivers/perf/arm_brbe.c @@ -647,41 +647,44 @@ void armv8pmu_branch_reset(void) isb(); } -static bool capture_branch_entry(struct pmu_hw_events *cpuc, - struct perf_event *event, int idx) +static void brbe_regset_branch_entries(struct pmu_hw_events *cpuc, struct perf_event *event, + struct brbe_regset *regset, int idx) { struct perf_branch_entry *entry = &cpuc->branches->branch_entries[idx]; - u64 brbinf = get_brbinf_reg(idx); - - /* - * There are no valid entries anymore on the buffer. - * Abort the branch record processing to save some - * cycles and also reduce the capture/process load - * for the user space as well. - */ - if (brbe_invalid(brbinf)) - return false; + u64 brbinf = regset[idx].brbinf; perf_clear_branch_entry_bitfields(entry); if (brbe_record_is_complete(brbinf)) { - entry->from = get_brbsrc_reg(idx); - entry->to = get_brbtgt_reg(idx); + entry->from = regset[idx].brbsrc; + entry->to = regset[idx].brbtgt; } else if (brbe_record_is_source_only(brbinf)) { - entry->from = get_brbsrc_reg(idx); + entry->from = regset[idx].brbsrc; entry->to = 0; } else if (brbe_record_is_target_only(brbinf)) { entry->from = 0; - entry->to = get_brbtgt_reg(idx); + entry->to = regset[idx].brbtgt; } capture_brbe_flags(entry, event, brbinf); - return true; +} + +static void process_branch_entries(struct pmu_hw_events *cpuc, struct perf_event *event, + struct brbe_regset *regset, int nr_regset) +{ + int idx; + + for (idx = 0; idx < nr_regset; idx++) + brbe_regset_branch_entries(cpuc, event, regset, idx); + + cpuc->branches->branch_stack.nr = nr_regset; + cpuc->branches->branch_stack.hw_idx = -1ULL; } void armv8pmu_branch_read(struct pmu_hw_events *cpuc, struct perf_event *event) { - int nr_hw_entries = brbe_get_numrec(cpuc->percpu_pmu->reg_brbidr); + struct arm64_perf_task_context *task_ctx = event->pmu_ctx->task_ctx_data; + struct brbe_regset live[BRBE_MAX_ENTRIES]; + int nr_live, nr_store, nr_hw_entries; u64 brbfcr, brbcr; - int idx = 0; brbcr = read_sysreg_s(SYS_BRBCR_EL1); brbfcr = read_sysreg_s(SYS_BRBFCR_EL1); @@ -693,25 +696,17 @@ void armv8pmu_branch_read(struct pmu_hw_events *cpuc, struct perf_event *event) write_sysreg_s(brbfcr | BRBFCR_EL1_PAUSED, SYS_BRBFCR_EL1); isb(); - /* Loop through bank 0 */ - select_brbe_bank(BRBE_BANK_IDX_0); - while (idx < nr_hw_entries && idx <= BRBE_BANK0_IDX_MAX) { - if (!capture_branch_entry(cpuc, event, idx)) - goto skip_bank_1; - idx++; - } - - /* Loop through bank 1 */ - select_brbe_bank(BRBE_BANK_IDX_1); - while (idx < nr_hw_entries && idx <= BRBE_BANK1_IDX_MAX) { - if (!capture_branch_entry(cpuc, event, idx)) - break; - idx++; + nr_hw_entries = brbe_get_numrec(cpuc->percpu_pmu->reg_brbidr); + nr_live = capture_brbe_regset(live, nr_hw_entries); + if (event->ctx->task) { + nr_store = task_ctx->nr_brbe_records; + nr_store = stitch_stored_live_entries(task_ctx->store, live, nr_store, + nr_live, nr_hw_entries); + process_branch_entries(cpuc, event, task_ctx->store, nr_store); + task_ctx->nr_brbe_records = 0; + } else { + process_branch_entries(cpuc, event, live, nr_live); } - -skip_bank_1: - 
cpuc->branches->branch_stack.nr = idx; - cpuc->branches->branch_stack.hw_idx = -1ULL; process_branch_aborts(cpuc); /* Unpause the buffer */