From patchwork Mon Nov 7 06:25:08 2022
X-Patchwork-Submitter: Anshuman Khandual
X-Patchwork-Id: 16248
From: Anshuman Khandual
To: linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, peterz@infradead.org,
    acme@kernel.org, mark.rutland@arm.com, will@kernel.org,
    catalin.marinas@arm.com
Cc: Anshuman Khandual, Mark Brown, James Clark, Rob Herring,
    Marc Zyngier, Suzuki Poulose, Ingo Molnar
Subject: [PATCH V5 1/7] arm64/perf: Add BRBE registers and fields
Date: Mon, 7 Nov 2022 11:55:08 +0530
Message-Id: <20221107062514.2851047-2-anshuman.khandual@arm.com>
In-Reply-To: <20221107062514.2851047-1-anshuman.khandual@arm.com>
References: <20221107062514.2851047-1-anshuman.khandual@arm.com>

This adds the BRBE related register definitions and the various related
field macros therein. These will be used subsequently by the BRBE driver,
which is added later in this series.
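For reference only (not part of the patch), here is a rough sketch of how the
accessor macros below are expected to split a branch record index into the
CRm and op2 parts of the system register encoding, assuming the bank-select
bit of the index lands in op2[2]:

	/*
	 * Illustration only: expected index -> encoding mapping behind the
	 * __SYS_BRBINFO/__SYS_BRBSRC/__SYS_BRBTGT accessors, i.e.
	 * sys_reg(op0 = 2, op1 = 1, CRn = 8, CRm, op2).
	 */
	static inline unsigned int brbe_example_crm(unsigned int n)
	{
		return n & 0xf;			/* record index within the bank */
	}

	static inline unsigned int brbe_example_op2(unsigned int n, unsigned int kind)
	{
		/* kind: 0 = BRBINF, 1 = BRBSRC, 2 = BRBTGT */
		return ((n & 0x10) >> 2) + kind;	/* bank select ends up in op2[2] */
	}

	/* e.g. record 17: BRBSRC17_EL1 -> CRm = 1, op2 = 4 + 1 = 5 */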
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Marc Zyngier
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual
Reviewed-by: Mark Brown
---
 arch/arm64/include/asm/sysreg.h | 103 ++++++++++++++++++++
 arch/arm64/tools/sysreg         | 160 ++++++++++++++++++++++++++++++++
 2 files changed, 263 insertions(+)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 7d301700d1a9..78335c7807dc 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -161,6 +161,109 @@
 #define SYS_DBGDTRTX_EL0		sys_reg(2, 3, 0, 5, 0)
 #define SYS_DBGVCR32_EL2		sys_reg(2, 4, 0, 7, 0)
 
+#define __SYS_BRBINFO(n)		sys_reg(2, 1, 8, ((n) & 0xf), ((((n) & 0x10) >> 2) + 0))
+#define __SYS_BRBSRC(n)			sys_reg(2, 1, 8, ((n) & 0xf), ((((n) & 0x10) >> 2) + 1))
+#define __SYS_BRBTGT(n)			sys_reg(2, 1, 8, ((n) & 0xf), ((((n) & 0x10) >> 2) + 2))
+
+#define SYS_BRBINF0_EL1			__SYS_BRBINFO(0)
+#define SYS_BRBINF1_EL1			__SYS_BRBINFO(1)
+#define SYS_BRBINF2_EL1			__SYS_BRBINFO(2)
+#define SYS_BRBINF3_EL1			__SYS_BRBINFO(3)
+#define SYS_BRBINF4_EL1			__SYS_BRBINFO(4)
+#define SYS_BRBINF5_EL1			__SYS_BRBINFO(5)
+#define SYS_BRBINF6_EL1			__SYS_BRBINFO(6)
+#define SYS_BRBINF7_EL1			__SYS_BRBINFO(7)
+#define SYS_BRBINF8_EL1			__SYS_BRBINFO(8)
+#define SYS_BRBINF9_EL1			__SYS_BRBINFO(9)
+#define SYS_BRBINF10_EL1		__SYS_BRBINFO(10)
+#define SYS_BRBINF11_EL1		__SYS_BRBINFO(11)
+#define SYS_BRBINF12_EL1		__SYS_BRBINFO(12)
+#define SYS_BRBINF13_EL1		__SYS_BRBINFO(13)
+#define SYS_BRBINF14_EL1		__SYS_BRBINFO(14)
+#define SYS_BRBINF15_EL1		__SYS_BRBINFO(15)
+#define SYS_BRBINF16_EL1		__SYS_BRBINFO(16)
+#define SYS_BRBINF17_EL1		__SYS_BRBINFO(17)
+#define SYS_BRBINF18_EL1		__SYS_BRBINFO(18)
+#define SYS_BRBINF19_EL1		__SYS_BRBINFO(19)
+#define SYS_BRBINF20_EL1		__SYS_BRBINFO(20)
+#define SYS_BRBINF21_EL1		__SYS_BRBINFO(21)
+#define SYS_BRBINF22_EL1		__SYS_BRBINFO(22)
+#define SYS_BRBINF23_EL1		__SYS_BRBINFO(23)
+#define SYS_BRBINF24_EL1		__SYS_BRBINFO(24)
+#define SYS_BRBINF25_EL1		__SYS_BRBINFO(25)
+#define SYS_BRBINF26_EL1		__SYS_BRBINFO(26)
+#define SYS_BRBINF27_EL1		__SYS_BRBINFO(27)
+#define SYS_BRBINF28_EL1		__SYS_BRBINFO(28)
+#define SYS_BRBINF29_EL1		__SYS_BRBINFO(29)
+#define SYS_BRBINF30_EL1		__SYS_BRBINFO(30)
+#define SYS_BRBINF31_EL1		__SYS_BRBINFO(31)
+
+#define SYS_BRBSRC0_EL1			__SYS_BRBSRC(0)
+#define SYS_BRBSRC1_EL1			__SYS_BRBSRC(1)
+#define SYS_BRBSRC2_EL1			__SYS_BRBSRC(2)
+#define SYS_BRBSRC3_EL1			__SYS_BRBSRC(3)
+#define SYS_BRBSRC4_EL1			__SYS_BRBSRC(4)
+#define SYS_BRBSRC5_EL1			__SYS_BRBSRC(5)
+#define SYS_BRBSRC6_EL1			__SYS_BRBSRC(6)
+#define SYS_BRBSRC7_EL1			__SYS_BRBSRC(7)
+#define SYS_BRBSRC8_EL1			__SYS_BRBSRC(8)
+#define SYS_BRBSRC9_EL1			__SYS_BRBSRC(9)
+#define SYS_BRBSRC10_EL1		__SYS_BRBSRC(10)
+#define SYS_BRBSRC11_EL1		__SYS_BRBSRC(11)
+#define SYS_BRBSRC12_EL1		__SYS_BRBSRC(12)
+#define SYS_BRBSRC13_EL1		__SYS_BRBSRC(13)
+#define SYS_BRBSRC14_EL1		__SYS_BRBSRC(14)
+#define SYS_BRBSRC15_EL1		__SYS_BRBSRC(15)
+#define SYS_BRBSRC16_EL1		__SYS_BRBSRC(16)
+#define SYS_BRBSRC17_EL1		__SYS_BRBSRC(17)
+#define SYS_BRBSRC18_EL1		__SYS_BRBSRC(18)
+#define SYS_BRBSRC19_EL1		__SYS_BRBSRC(19)
+#define SYS_BRBSRC20_EL1		__SYS_BRBSRC(20)
+#define SYS_BRBSRC21_EL1		__SYS_BRBSRC(21)
+#define SYS_BRBSRC22_EL1		__SYS_BRBSRC(22)
+#define SYS_BRBSRC23_EL1		__SYS_BRBSRC(23)
+#define SYS_BRBSRC24_EL1		__SYS_BRBSRC(24)
+#define SYS_BRBSRC25_EL1		__SYS_BRBSRC(25)
+#define SYS_BRBSRC26_EL1		__SYS_BRBSRC(26)
+#define SYS_BRBSRC27_EL1		__SYS_BRBSRC(27)
+#define SYS_BRBSRC28_EL1		__SYS_BRBSRC(28)
+#define SYS_BRBSRC29_EL1		__SYS_BRBSRC(29)
+#define SYS_BRBSRC30_EL1		__SYS_BRBSRC(30)
+#define SYS_BRBSRC31_EL1		__SYS_BRBSRC(31)
+
+#define SYS_BRBTGT0_EL1			__SYS_BRBTGT(0)
+#define SYS_BRBTGT1_EL1			__SYS_BRBTGT(1)
+#define SYS_BRBTGT2_EL1			__SYS_BRBTGT(2)
+#define SYS_BRBTGT3_EL1			__SYS_BRBTGT(3)
+#define SYS_BRBTGT4_EL1			__SYS_BRBTGT(4)
+#define SYS_BRBTGT5_EL1			__SYS_BRBTGT(5)
+#define SYS_BRBTGT6_EL1			__SYS_BRBTGT(6)
+#define SYS_BRBTGT7_EL1			__SYS_BRBTGT(7)
+#define SYS_BRBTGT8_EL1			__SYS_BRBTGT(8)
+#define SYS_BRBTGT9_EL1			__SYS_BRBTGT(9)
+#define SYS_BRBTGT10_EL1		__SYS_BRBTGT(10)
+#define SYS_BRBTGT11_EL1		__SYS_BRBTGT(11)
+#define SYS_BRBTGT12_EL1		__SYS_BRBTGT(12)
+#define SYS_BRBTGT13_EL1		__SYS_BRBTGT(13)
+#define SYS_BRBTGT14_EL1		__SYS_BRBTGT(14)
+#define SYS_BRBTGT15_EL1		__SYS_BRBTGT(15)
+#define SYS_BRBTGT16_EL1		__SYS_BRBTGT(16)
+#define SYS_BRBTGT17_EL1		__SYS_BRBTGT(17)
+#define SYS_BRBTGT18_EL1		__SYS_BRBTGT(18)
+#define SYS_BRBTGT19_EL1		__SYS_BRBTGT(19)
+#define SYS_BRBTGT20_EL1		__SYS_BRBTGT(20)
+#define SYS_BRBTGT21_EL1		__SYS_BRBTGT(21)
+#define SYS_BRBTGT22_EL1		__SYS_BRBTGT(22)
+#define SYS_BRBTGT23_EL1		__SYS_BRBTGT(23)
+#define SYS_BRBTGT24_EL1		__SYS_BRBTGT(24)
+#define SYS_BRBTGT25_EL1		__SYS_BRBTGT(25)
+#define SYS_BRBTGT26_EL1		__SYS_BRBTGT(26)
+#define SYS_BRBTGT27_EL1		__SYS_BRBTGT(27)
+#define SYS_BRBTGT28_EL1		__SYS_BRBTGT(28)
+#define SYS_BRBTGT29_EL1		__SYS_BRBTGT(29)
+#define SYS_BRBTGT30_EL1		__SYS_BRBTGT(30)
+#define SYS_BRBTGT31_EL1		__SYS_BRBTGT(31)
+
 #define SYS_MIDR_EL1			sys_reg(3, 0, 0, 0, 0)
 #define SYS_MPIDR_EL1			sys_reg(3, 0, 0, 0, 5)
 #define SYS_REVIDR_EL1			sys_reg(3, 0, 0, 0, 6)
diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
index 384757a7eda9..45b1834de1ae 100644
--- a/arch/arm64/tools/sysreg
+++ b/arch/arm64/tools/sysreg
@@ -167,6 +167,166 @@
 Enum	3:0	BT
 EndEnum
 EndSysreg
+
+# This is just a dummy register declaration to get all common field masks and
+# shifts for accessing given BRBINF contents.
+Sysreg	BRBINF_EL1	2	1	8	0	0
+Res0	63:47
+Field	46	CCU
+Field	45:32	CC
+Res0	31:18
+Field	17	LASTFAILED
+Field	16	T
+Res0	15:14
+Enum	13:8	TYPE
+	0b000000	UNCOND_DIR
+	0b000001	INDIR
+	0b000010	DIR_LINK
+	0b000011	INDIR_LINK
+	0b000101	RET_SUB
+	0b000111	RET_EXCPT
+	0b001000	COND_DIR
+	0b100001	DEBUG_HALT
+	0b100010	CALL
+	0b100011	TRAP
+	0b100100	SERROR
+	0b100110	INST_DEBUG
+	0b100111	DATA_DEBUG
+	0b101010	ALGN_FAULT
+	0b101011	INST_FAULT
+	0b101100	DATA_FAULT
+	0b101110	IRQ
+	0b101111	FIQ
+	0b111001	DEBUG_EXIT
+EndEnum
+Enum	7:6	EL
+	0b00	EL0
+	0b01	EL1
+	0b10	EL2
+	0b11	EL3
+EndEnum
+Field	5	MPRED
+Res0	4:2
+Enum	1:0	VALID
+	0b00	NONE
+	0b01	TARGET
+	0b10	SOURCE
+	0b11	FULL
+EndEnum
+EndSysreg
+
+Sysreg	BRBCR_EL1	2	1	9	0	0
+Res0	63:24
+Field	23	EXCEPTION
+Field	22	ERTN
+Res0	21:9
+Field	8	FZP
+Res0	7
+Enum	6:5	TS
+	0b01	VIRTUAL
+	0b10	GST_PHYSICAL
+	0b11	PHYSICAL
+EndEnum
+Field	4	MPRED
+Field	3	CC
+Res0	2
+Field	1	E1BRE
+Field	0	E0BRE
+EndSysreg
+
+Sysreg	BRBFCR_EL1	2	1	9	0	1
+Res0	63:30
+Enum	29:28	BANK
+	0b0	FIRST
+	0b1	SECOND
+EndEnum
+Res0	27:23
+Field	22	CONDDIR
+Field	21	DIRCALL
+Field	20	INDCALL
+Field	19	RTN
+Field	18	INDIRECT
+Field	17	DIRECT
+Field	16	EnI
+Res0	15:8
+Field	7	PAUSED
+Field	6	LASTFAILED
+Res0	5:0
+EndSysreg
+
+Sysreg	BRBTS_EL1	2	1	9	0	2
+Field	63:0	TS
+EndSysreg
+
+Sysreg	BRBINFINJ_EL1	2	1	9	1	0
+Res0	63:47
+Field	46	CCU
+Field	45:32	CC
+Res0	31:18
+Field	17	LASTFAILED
+Field	16	T
+Res0	15:14
+Enum	13:8	TYPE
+	0b000000	UNCOND_DIR
+	0b000001	INDIR
+	0b000010	DIR_LINK
+	0b000011	INDIR_LINK
+	0b000101	RET_SUB
+	0b000111	RET_EXCPT
+	0b001000	COND_DIR
+	0b100001	DEBUG_HALT
+	0b100010	CALL
+	0b100011	TRAP
+	0b100100	SERROR
+	0b100110	INST_DEBUG
+	0b100111	DATA_DEBUG
+	0b101010	ALGN_FAULT
+	0b101011	INST_FAULT
+	0b101100	DATA_FAULT
+	0b101110	IRQ
+	0b101111	FIQ
+	0b111001	DEBUG_EXIT
+EndEnum
+Enum	7:6	EL
+	0b00	EL0
+	0b01	EL1
+	0b10	EL2
+	0b11	EL3
+EndEnum
+Field	5	MPRED
+Res0	4:2
+Enum	1:0	VALID
+	0b00	NONE
+	0b01	TARGET
+	0b10	SOURCE
+	0b11	FULL
+EndEnum
+EndSysreg
+
+Sysreg	BRBSRCINJ_EL1	2	1	9	1	1
+Field	63:0	ADDRESS
+EndSysreg
+
+Sysreg	BRBTGTINJ_EL1	2	1	9	1	2
+Field	63:0	ADDRESS
+EndSysreg
+
+Sysreg	BRBIDR0_EL1	2	1	9	2	0
+Res0	63:16
+Enum	15:12	CC
+	0b101	20_BIT
+EndEnum
+Enum	11:8	FORMAT
+	0b0	0
+EndEnum
+Enum	7:0	NUMREC
+	0b1000		8
+	0b10000		16
+	0b100000	32
+	0b1000000	64
+EndEnum
+EndSysreg
+
 Sysreg	ID_AA64ZFR0_EL1	3	0	0	4	4
 Res0	63:60
 Enum	59:56	F64MM
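As an editorial aside (not part of the patch), the sysreg descriptions above
get turned into field shift/mask and enum macros by the sysreg generation
script; a hedged sketch of how those names are then used, matching the helper
style of the BRBE driver added later in this series ("example_*" names are
illustrative only):

	#include <linux/bitfield.h>

	/*
	 * Illustration only: decoding a raw BRBINF value with the generated
	 * field macros (BRBINF_EL1_TYPE_MASK, BRBINF_EL1_MPRED and friends).
	 */
	static inline int example_brbe_branch_type(u64 brbinf)
	{
		return FIELD_GET(BRBINF_EL1_TYPE_MASK, brbinf);
	}

	static inline bool example_brbe_mispredicted(u64 brbinf)
	{
		return !!(brbinf & BRBINF_EL1_MPRED);
	}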
From patchwork Mon Nov 7 06:25:09 2022
X-Patchwork-Submitter: Anshuman Khandual
X-Patchwork-Id: 16246
From: Anshuman Khandual
To: linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, peterz@infradead.org,
    acme@kernel.org, mark.rutland@arm.com, will@kernel.org,
    catalin.marinas@arm.com
Cc: Anshuman Khandual, Mark Brown, James Clark, Rob Herring,
    Marc Zyngier, Suzuki Poulose, Ingo Molnar
Subject: [PATCH V5 2/7] arm64/perf: Update struct arm_pmu for BRBE
Date: Mon, 7 Nov 2022 11:55:09 +0530
Message-Id: <20221107062514.2851047-3-anshuman.khandual@arm.com>
In-Reply-To: <20221107062514.2851047-1-anshuman.khandual@arm.com>
References: <20221107062514.2851047-1-anshuman.khandual@arm.com>
Although BRBE is an armv8-specific HW feature, abstracting out its various
function callbacks at the struct arm_pmu level is preferred, as it is cleaner
and easier to follow and maintain. Besides, some helpers such as
brbe_supported(), brbe_probe() and brbe_reset() would not fit seamlessly if
embedded via the existing arm_pmu helpers in the armv8 implementation.

Update struct arm_pmu to include all the required helpers that will drive
BRBE functionality for a given PMU implementation. These are the following:

- brbe_filter	: Convert perf event filters into BRBE HW filters
- brbe_probe	: Probe BRBE HW and capture its attributes
- brbe_enable	: Enable BRBE HW with a given config
- brbe_disable	: Disable BRBE HW
- brbe_read	: Read BRBE buffer for captured branch records
- brbe_reset	: Reset BRBE buffer
- brbe_supported: Whether BRBE is supported or not

A BRBE driver implementation needs to provide these functionalities.

Cc: Will Deacon
Cc: Mark Rutland
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Catalin Marinas
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-perf-users@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual
---
 arch/arm64/kernel/perf_event.c | 36 ++++++++++++++++++++++++++++++++++
 include/linux/perf/arm_pmu.h   | 21 ++++++++++++++++++++
 2 files changed, 57 insertions(+)

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 7b0643fe2f13..c97377e28288 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -1025,6 +1025,35 @@ static int armv8pmu_filter_match(struct perf_event *event)
 	return evtype != ARMV8_PMUV3_PERFCTR_CHAIN;
 }
 
+static void armv8pmu_brbe_filter(struct pmu_hw_events *hw_event, struct perf_event *event)
+{
+}
+
+static void armv8pmu_brbe_enable(struct pmu_hw_events *hw_event)
+{
+}
+
+static void armv8pmu_brbe_disable(struct pmu_hw_events *hw_event)
+{
+}
+
+static void armv8pmu_brbe_read(struct pmu_hw_events *hw_event, struct perf_event *event)
+{
+}
+
+static void armv8pmu_brbe_probe(struct pmu_hw_events *hw_event)
+{
+}
+
+static void armv8pmu_brbe_reset(struct pmu_hw_events *hw_event)
+{
+}
+
+static bool armv8pmu_brbe_supported(struct perf_event *event)
+{
+	return false;
+}
+
 static void armv8pmu_reset(void *info)
 {
 	struct arm_pmu *cpu_pmu = (struct arm_pmu *)info;
@@ -1257,6 +1286,13 @@ static int armv8_pmu_init(struct arm_pmu *cpu_pmu, char *name,
 
 	cpu_pmu->pmu.event_idx		= armv8pmu_user_event_idx;
 
+	cpu_pmu->brbe_filter		= armv8pmu_brbe_filter;
+	cpu_pmu->brbe_enable		= armv8pmu_brbe_enable;
+	cpu_pmu->brbe_disable		= armv8pmu_brbe_disable;
+	cpu_pmu->brbe_read		= armv8pmu_brbe_read;
+	cpu_pmu->brbe_probe		= armv8pmu_brbe_probe;
+	cpu_pmu->brbe_reset		= armv8pmu_brbe_reset;
+	cpu_pmu->brbe_supported		= armv8pmu_brbe_supported;
 	cpu_pmu->name			= name;
 	cpu_pmu->map_event		= map_event;
 	cpu_pmu->attr_groups[ARMPMU_ATTR_GROUP_EVENTS] = events ?
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index 0356cb6a215d..67a6d59786f2 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -101,6 +101,27 @@ struct arm_pmu {
 	void		(*reset)(void *);
 	int		(*map_event)(struct perf_event *event);
 	int		(*filter_match)(struct perf_event *event);
+
+	/* Convert perf event filters into BRBE HW filters */
+	void		(*brbe_filter)(struct pmu_hw_events *hw_events, struct perf_event *event);
+
+	/* Probe BRBE HW and capture its attributes */
+	void		(*brbe_probe)(struct pmu_hw_events *hw_events);
+
+	/* Enable BRBE HW with a given config */
+	void		(*brbe_enable)(struct pmu_hw_events *hw_events);
+
+	/* Disable BRBE HW */
+	void		(*brbe_disable)(struct pmu_hw_events *hw_events);
+
+	/* Process BRBE buffer for captured branch records */
+	void		(*brbe_read)(struct pmu_hw_events *hw_events, struct perf_event *event);
+
+	/* Reset BRBE buffer */
+	void		(*brbe_reset)(struct pmu_hw_events *hw_events);
+
+	/* Check whether BRBE is supported */
+	bool		(*brbe_supported)(struct perf_event *event);
 	int		num_events;
 	bool		secure_access; /* 32-bit ARM only */
 #define ARMV8_PMUV3_MAX_COMMON_EVENTS		0x40

From patchwork Mon Nov 7 06:25:10 2022
X-Patchwork-Submitter: Anshuman Khandual
X-Patchwork-Id: 16253
From: Anshuman Khandual
To: linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, peterz@infradead.org,
    acme@kernel.org, mark.rutland@arm.com, will@kernel.org,
    catalin.marinas@arm.com
Cc: Anshuman Khandual, Mark Brown, James Clark, Rob Herring,
    Marc Zyngier, Suzuki Poulose, Ingo Molnar
Subject: [PATCH V5 3/7] arm64/perf: Update struct pmu_hw_events for BRBE
Date: Mon, 7 Nov 2022 11:55:10 +0530
Message-Id: <20221107062514.2851047-4-anshuman.khandual@arm.com>
In-Reply-To: <20221107062514.2851047-1-anshuman.khandual@arm.com>
References: <20221107062514.2851047-1-anshuman.khandual@arm.com>

BRBE related context and data for a single perf event instance will be
tracked in struct pmu_hw_events. Hence update the structure to accommodate
the required BRBE details.
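As a rough illustration only (not part of this patch), the per-cpu buffer
added here is what eventually gets attached to a perf sample once the IRQ
handling and driver changes later in this series land; a minimal sketch
using the fields introduced below, where 'nr_captured' is a hypothetical
count of valid records:

	/*
	 * Illustration only: publishing the captured per-cpu branch records
	 * into a perf sample.
	 */
	static void example_publish_branches(struct pmu_hw_events *cpuc,
					     struct perf_sample_data *data,
					     int nr_captured)
	{
		cpuc->branches->brbe_stack.nr = nr_captured;
		cpuc->branches->brbe_stack.hw_idx = -1ULL;
		data->br_stack = &cpuc->branches->brbe_stack;
		data->sample_flags |= PERF_SAMPLE_BRANCH_STACK;
	}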
Cc: Will Deacon
Cc: Mark Rutland
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual
---
 drivers/perf/arm_pmu.c       |  1 +
 include/linux/perf/arm_pmu.h | 27 +++++++++++++++++++++++++++
 2 files changed, 28 insertions(+)

diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index 3f07df5a7e95..5048a500441e 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -905,6 +905,7 @@ static struct arm_pmu *__armpmu_alloc(gfp_t flags)
 
 		events = per_cpu_ptr(pmu->hw_events, cpu);
 		raw_spin_lock_init(&events->pmu_lock);
+		events->branches = kmalloc(sizeof(struct brbe_records), flags);
 		events->percpu_pmu = pmu;
 	}
 
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index 67a6d59786f2..bda0d9984a98 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -44,6 +44,16 @@ static_assert((PERF_EVENT_FLAG_ARCH & ARMPMU_EVT_47BIT) == ARMPMU_EVT_47BIT);
 	},						\
 }
 
+/*
+ * Maximum branch records in BRBE
+ */
+#define BRBE_MAX_ENTRIES	64
+
+struct brbe_records {
+	struct perf_branch_stack	brbe_stack;
+	struct perf_branch_entry	brbe_entries[BRBE_MAX_ENTRIES];
+};
+
 /* The events for a given PMU register set. */
 struct pmu_hw_events {
 	/*
@@ -70,6 +80,23 @@ struct pmu_hw_events {
 	struct arm_pmu		*percpu_pmu;
 
 	int irq;
+
+	/* Detected BRBE attributes */
+	bool	brbe_v1p1;
+	int	brbe_cc;
+	int	brbe_nr;
+	int	brbe_format;
+
+	/* Evaluated BRBE configuration */
+	u64	brbfcr;
+	u64	brbcr;
+
+	/* Tracked BRBE context */
+	unsigned int brbe_users;
+	void	*brbe_context;
+
+	/* Captured BRBE buffer - copied as is into perf_sample_data */
+	struct brbe_records	*branches;
 };
 
 enum armpmu_attr_groups {

From patchwork Mon Nov 7 06:25:11 2022
X-Patchwork-Submitter: Anshuman Khandual
X-Patchwork-Id: 16250
From: Anshuman Khandual
To: linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, peterz@infradead.org,
    acme@kernel.org, mark.rutland@arm.com, will@kernel.org,
    catalin.marinas@arm.com
Cc: Anshuman Khandual, Mark Brown, James Clark, Rob Herring,
    Marc Zyngier, Suzuki Poulose, Ingo Molnar
Subject: [PATCH V5 4/7] driver/perf/arm_pmu_platform: Add support for BRBE attributes detection
Date: Mon, 7 Nov 2022 11:55:11 +0530
Message-Id: <20221107062514.2851047-5-anshuman.khandual@arm.com>
In-Reply-To: <20221107062514.2851047-1-anshuman.khandual@arm.com>
References: <20221107062514.2851047-1-anshuman.khandual@arm.com>

This adds the arm_pmu infrastructure to probe a BRBE implementation's
attributes via callbacks exported later by the driver. The actual BRBE
feature detection will be added by the driver itself. The CPU-specific BRBE
record count, cycle count support and format get detected during PMU init.
This information gets saved in the per-cpu struct pmu_hw_events, which later
helps in operating BRBE during a perf event context.
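The reason for per-CPU cross calls here is that the BRBE ID information
lives in CPU-local system registers; a hedged sketch of the idea (the
register name comes from the earlier sysreg patch, the "example_*" helpers
are illustrative only):

	/*
	 * Illustration only: BRBIDR0_EL1 must be read on the CPU being
	 * probed, hence one cross call per supported CPU.
	 */
	static void example_read_brbe_idr(void *info)
	{
		u64 *brbidr = info;

		*brbidr = read_sysreg_s(SYS_BRBIDR0_EL1);
	}

	static int example_probe_one_cpu(int cpu, u64 *brbidr)
	{
		return smp_call_function_single(cpu, example_read_brbe_idr, brbidr, 1);
	}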
Cc: Will Deacon
Cc: Mark Rutland
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual
---
 drivers/perf/arm_pmu_platform.c | 34 +++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/drivers/perf/arm_pmu_platform.c b/drivers/perf/arm_pmu_platform.c
index 933b96e243b8..acdc445081aa 100644
--- a/drivers/perf/arm_pmu_platform.c
+++ b/drivers/perf/arm_pmu_platform.c
@@ -172,6 +172,36 @@ static int armpmu_request_irqs(struct arm_pmu *armpmu)
 	return err;
 }
 
+static void arm_brbe_probe_cpu(void *info)
+{
+	struct pmu_hw_events *hw_events;
+	struct arm_pmu *armpmu = info;
+
+	/*
+	 * Return from here if the BRBE driver has not been
+	 * implemented for this PMU. This helps prevent a
+	 * kernel crash later when brbe_probe() is called
+	 * on the PMU.
+	 */
+	if (!armpmu->brbe_probe)
+		return;
+
+	hw_events = per_cpu_ptr(armpmu->hw_events, smp_processor_id());
+	armpmu->brbe_probe(hw_events);
+}
+
+static int armpmu_request_brbe(struct arm_pmu *armpmu)
+{
+	int cpu, err = 0;
+
+	for_each_cpu(cpu, &armpmu->supported_cpus) {
+		err = smp_call_function_single(cpu, arm_brbe_probe_cpu, armpmu, 1);
+		if (err)
+			return err;
+	}
+	return err;
+}
+
 static void armpmu_free_irqs(struct arm_pmu *armpmu)
 {
 	int cpu;
@@ -229,6 +259,10 @@ int arm_pmu_device_probe(struct platform_device *pdev,
 	if (ret)
 		goto out_free_irqs;
 
+	ret = armpmu_request_brbe(pmu);
+	if (ret)
+		goto out_free_irqs;
+
 	ret = armpmu_register(pmu);
 	if (ret) {
 		dev_err(dev, "failed to register PMU devices!\n");

From patchwork Mon Nov 7 06:25:12 2022
X-Patchwork-Submitter: Anshuman Khandual
X-Patchwork-Id: 16254
From: Anshuman Khandual
To: linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, peterz@infradead.org,
    acme@kernel.org, mark.rutland@arm.com, will@kernel.org,
    catalin.marinas@arm.com
Cc: Anshuman Khandual, Mark Brown, James Clark, Rob Herring,
    Marc Zyngier, Suzuki Poulose, Ingo Molnar
Subject: [PATCH V5 5/7] arm64/perf: Drive BRBE from perf event states
Date: Mon, 7 Nov 2022 11:55:12 +0530
Message-Id: <20221107062514.2851047-6-anshuman.khandual@arm.com>
In-Reply-To: <20221107062514.2851047-1-anshuman.khandual@arm.com>
References: <20221107062514.2851047-1-anshuman.khandual@arm.com>

Branch stack sampling rides along with the normal perf event, and all the
branch records get captured during the PMU interrupt. This just changes
perf event handling on the arm64 platform to accommodate the required BRBE
operations that will enable branch stack sampling support.
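From the user's point of view nothing arm64-specific is needed: branch stack
sampling is requested through the usual perf ABI. A minimal sketch of such a
request (illustrative values only), roughly what "perf record -j any,u" sets
up:

	#include <linux/perf_event.h>
	#include <string.h>

	/* Illustration only: a sampling event that also asks for branch records. */
	static void example_branch_stack_attr(struct perf_event_attr *attr)
	{
		memset(attr, 0, sizeof(*attr));
		attr->size = sizeof(*attr);
		attr->type = PERF_TYPE_HARDWARE;
		attr->config = PERF_COUNT_HW_CPU_CYCLES;
		attr->sample_period = 100000;
		attr->sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_BRANCH_STACK;
		attr->branch_sample_type = PERF_SAMPLE_BRANCH_ANY | PERF_SAMPLE_BRANCH_USER;
		attr->exclude_kernel = 1;
	}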
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Arnaldo Carvalho de Melo
Cc: Mark Rutland
Cc: Will Deacon
Cc: Catalin Marinas
Cc: linux-perf-users@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Anshuman Khandual
---
 arch/arm64/kernel/perf_event.c |  7 ++++++
 drivers/perf/arm_pmu.c         | 40 ++++++++++++++++++++++++++++++++++
 2 files changed, 47 insertions(+)

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index c97377e28288..97db333d1208 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -874,6 +874,13 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
 		if (!armpmu_event_set_period(event))
 			continue;
 
+		if (has_branch_stack(event)) {
+			cpu_pmu->brbe_read(cpuc, event);
+			data.br_stack = &cpuc->branches->brbe_stack;
+			data.sample_flags |= PERF_SAMPLE_BRANCH_STACK;
+			cpu_pmu->brbe_reset(cpuc);
+		}
+
 		/*
 		 * Perf event overflow will queue the processing of the event as
 		 * an irq_work which will be taken care of in the handling of
diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index 5048a500441e..1a8dca4e513e 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -271,12 +271,22 @@ armpmu_stop(struct perf_event *event, int flags)
 {
 	struct arm_pmu *armpmu = to_arm_pmu(event->pmu);
 	struct hw_perf_event *hwc = &event->hw;
+	struct pmu_hw_events *hw_events = this_cpu_ptr(armpmu->hw_events);
 
 	/*
 	 * ARM pmu always has to update the counter, so ignore
 	 * PERF_EF_UPDATE, see comments in armpmu_start().
 	 */
 	if (!(hwc->state & PERF_HES_STOPPED)) {
+		if (has_branch_stack(event)) {
+			WARN_ON_ONCE(!hw_events->brbe_users);
+			hw_events->brbe_users--;
+			if (!hw_events->brbe_users) {
+				hw_events->brbe_context = NULL;
+				armpmu->brbe_disable(hw_events);
+			}
+		}
+
 		armpmu->disable(event);
 		armpmu_event_update(event);
 		hwc->state |= PERF_HES_STOPPED | PERF_HES_UPTODATE;
@@ -287,6 +297,7 @@ static void armpmu_start(struct perf_event *event, int flags)
 {
 	struct arm_pmu *armpmu = to_arm_pmu(event->pmu);
 	struct hw_perf_event *hwc = &event->hw;
+	struct pmu_hw_events *hw_events = this_cpu_ptr(armpmu->hw_events);
 
 	/*
 	 * ARM pmu always has to reprogram the period, so ignore
@@ -304,6 +315,14 @@ static void armpmu_start(struct perf_event *event, int flags)
 	 * happened since disabling.
 	 */
 	armpmu_event_set_period(event);
+	if (has_branch_stack(event)) {
+		if (event->ctx->task && hw_events->brbe_context != event->ctx) {
+			armpmu->brbe_reset(hw_events);
+			hw_events->brbe_context = event->ctx;
+		}
+		armpmu->brbe_enable(hw_events);
+		hw_events->brbe_users++;
+	}
 	armpmu->enable(event);
 }
 
@@ -349,6 +368,10 @@ armpmu_add(struct perf_event *event, int flags)
 	hw_events->events[idx] = event;
 
 	hwc->state = PERF_HES_STOPPED | PERF_HES_UPTODATE;
+
+	if (has_branch_stack(event))
+		armpmu->brbe_filter(hw_events, event);
+
 	if (flags & PERF_EF_START)
 		armpmu_start(event, PERF_EF_RELOAD);
 
@@ -443,6 +466,7 @@ __hw_perf_event_init(struct perf_event *event)
 {
 	struct arm_pmu *armpmu = to_arm_pmu(event->pmu);
 	struct hw_perf_event *hwc = &event->hw;
+	struct pmu_hw_events *hw_events = this_cpu_ptr(armpmu->hw_events);
 	int mapping;
 
 	hwc->flags = 0;
@@ -492,6 +516,9 @@ __hw_perf_event_init(struct perf_event *event)
 		local64_set(&hwc->period_left, hwc->sample_period);
 	}
 
+	if (has_branch_stack(event))
+		armpmu->brbe_filter(hw_events, event);
+
 	return validate_group(event);
 }
 
@@ -520,6 +547,18 @@ static int armpmu_event_init(struct perf_event *event)
 	return __hw_perf_event_init(event);
 }
 
+static void armpmu_sched_task(struct perf_event_context *ctx, bool sched_in)
+{
+	struct arm_pmu *armpmu = to_arm_pmu(ctx->pmu);
+	struct pmu_hw_events *hw_events = this_cpu_ptr(armpmu->hw_events);
+
+	if (!hw_events->brbe_users)
+		return;
+
+	if (sched_in)
+		armpmu->brbe_reset(hw_events);
+}
+
 static void armpmu_enable(struct pmu *pmu)
 {
 	struct arm_pmu *armpmu = to_arm_pmu(pmu);
@@ -877,6 +916,7 @@ static struct arm_pmu *__armpmu_alloc(gfp_t flags)
 	}
 
 	pmu->pmu = (struct pmu) {
+		.sched_task	= armpmu_sched_task,
 		.pmu_enable	= armpmu_enable,
 		.pmu_disable	= armpmu_disable,
 		.event_init	= armpmu_event_init,

From patchwork Mon Nov 7 06:25:13 2022
X-Patchwork-Submitter: Anshuman Khandual
X-Patchwork-Id: 16249
From: Anshuman Khandual
To: linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, peterz@infradead.org,
    acme@kernel.org, mark.rutland@arm.com, will@kernel.org,
    catalin.marinas@arm.com
Cc: Anshuman Khandual, Mark Brown, James Clark, Rob Herring,
    Marc Zyngier, Suzuki Poulose, Ingo Molnar
Subject: [PATCH V5 6/7] arm64/perf: Add BRBE driver
Date: Mon, 7 Nov 2022 11:55:13 +0530
Message-Id: <20221107062514.2851047-7-anshuman.khandual@arm.com>
In-Reply-To: <20221107062514.2851047-1-anshuman.khandual@arm.com>
References: <20221107062514.2851047-1-anshuman.khandual@arm.com>

This adds a BRBE driver which implements all the required helper functions
for struct arm_pmu. The following functions are defined by this driver,
which will configure, enable, capture, reset and disable the BRBE buffer HW
as and when requested via the perf branch stack sampling framework.
- arm64_pmu_brbe_filter() - arm64_pmu_brbe_enable() - arm64_pmu_brbe_disable() - arm64_pmu_brbe_read() - arm64_pmu_brbe_probe() - arm64_pmu_brbe_reset() - arm64_pmu_brbe_supported() Cc: Peter Zijlstra Cc: Ingo Molnar Cc: Arnaldo Carvalho de Melo Cc: Mark Rutland Cc: Will Deacon Cc: Catalin Marinas Cc: linux-arm-kernel@lists.infradead.org Cc: linux-perf-users@vger.kernel.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Anshuman Khandual --- arch/arm64/kernel/perf_event.c | 8 +- drivers/perf/Kconfig | 11 + drivers/perf/Makefile | 1 + drivers/perf/arm_pmu_brbe.c | 441 +++++++++++++++++++++++++++++++++ drivers/perf/arm_pmu_brbe.h | 259 +++++++++++++++++++ include/linux/perf/arm_pmu.h | 20 ++ 6 files changed, 739 insertions(+), 1 deletion(-) create mode 100644 drivers/perf/arm_pmu_brbe.c create mode 100644 drivers/perf/arm_pmu_brbe.h diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c index 97db333d1208..85a3aaefc0fb 100644 --- a/arch/arm64/kernel/perf_event.c +++ b/arch/arm64/kernel/perf_event.c @@ -1034,31 +1034,37 @@ static int armv8pmu_filter_match(struct perf_event *event) static void armv8pmu_brbe_filter(struct pmu_hw_events *hw_event, struct perf_event *event) { + arm64_pmu_brbe_filter(hw_event, event); } static void armv8pmu_brbe_enable(struct pmu_hw_events *hw_event) { + arm64_pmu_brbe_enable(hw_event); } static void armv8pmu_brbe_disable(struct pmu_hw_events *hw_event) { + arm64_pmu_brbe_disable(hw_event); } static void armv8pmu_brbe_read(struct pmu_hw_events *hw_event, struct perf_event *event) { + arm64_pmu_brbe_read(hw_event, event); } static void armv8pmu_brbe_probe(struct pmu_hw_events *hw_event) { + arm64_pmu_brbe_probe(hw_event); } static void armv8pmu_brbe_reset(struct pmu_hw_events *hw_event) { + arm64_pmu_brbe_reset(hw_event); } static bool armv8pmu_brbe_supported(struct perf_event *event) { - return false; + return arm64_pmu_brbe_supported(event); } static void armv8pmu_reset(void *info) diff --git a/drivers/perf/Kconfig b/drivers/perf/Kconfig index 341010f20b77..cfb79eddeb02 100644 --- a/drivers/perf/Kconfig +++ b/drivers/perf/Kconfig @@ -190,6 +190,17 @@ config ALIBABA_UNCORE_DRW_PMU Support for Driveway PMU events monitoring on Yitian 710 DDR Sub-system. +config ARM_BRBE_PMU + bool "Enable support for Branch Record Buffer Extension (BRBE)" + depends on ARM64 && ARM_PMU + default y + help + Enable perf support for Branch Record Buffer Extension (BRBE) which + records all branches taken in an execution path. This supports some + branch types and privilege based filtering. It captured additional + relevant information such as cycle count, misprediction and branch + type, branch privilege level etc. 
+ source "drivers/perf/hisilicon/Kconfig" config MARVELL_CN10K_DDR_PMU diff --git a/drivers/perf/Makefile b/drivers/perf/Makefile index 050d04ee19dd..00428793e66c 100644 --- a/drivers/perf/Makefile +++ b/drivers/perf/Makefile @@ -16,6 +16,7 @@ obj-$(CONFIG_RISCV_PMU_SBI) += riscv_pmu_sbi.o obj-$(CONFIG_THUNDERX2_PMU) += thunderx2_pmu.o obj-$(CONFIG_XGENE_PMU) += xgene_pmu.o obj-$(CONFIG_ARM_SPE_PMU) += arm_spe_pmu.o +obj-$(CONFIG_ARM_BRBE_PMU) += arm_pmu_brbe.o obj-$(CONFIG_ARM_DMC620_PMU) += arm_dmc620_pmu.o obj-$(CONFIG_MARVELL_CN10K_TAD_PMU) += marvell_cn10k_tad_pmu.o obj-$(CONFIG_MARVELL_CN10K_DDR_PMU) += marvell_cn10k_ddr_pmu.o diff --git a/drivers/perf/arm_pmu_brbe.c b/drivers/perf/arm_pmu_brbe.c new file mode 100644 index 000000000000..ce1aa4171481 --- /dev/null +++ b/drivers/perf/arm_pmu_brbe.c @@ -0,0 +1,441 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Branch Record Buffer Extension Driver. + * + * Copyright (C) 2021 ARM Limited + * + * Author: Anshuman Khandual + */ +#include "arm_pmu_brbe.h" + +#define BRBFCR_BRANCH_ALL (BRBFCR_EL1_DIRECT | BRBFCR_EL1_INDIRECT | \ + BRBFCR_EL1_RTN | BRBFCR_EL1_INDCALL | \ + BRBFCR_EL1_DIRCALL | BRBFCR_EL1_CONDDIR) + +#define BRBE_FCR_MASK (BRBFCR_BRANCH_ALL) +#define BRBE_CR_MASK (BRBCR_EL1_EXCEPTION | BRBCR_EL1_ERTN | BRBCR_EL1_CC | \ + BRBCR_EL1_MPRED | BRBCR_EL1_E1BRE | BRBCR_EL1_E0BRE) + +static void set_brbe_disabled(struct pmu_hw_events *cpuc) +{ + cpuc->brbe_nr = 0; +} + +static bool brbe_disabled(struct pmu_hw_events *cpuc) +{ + return !cpuc->brbe_nr; +} + +bool arm64_pmu_brbe_supported(struct perf_event *event) +{ + struct arm_pmu *armpmu = to_arm_pmu(event->pmu); + struct pmu_hw_events *hw_events = per_cpu_ptr(armpmu->hw_events, event->cpu); + + /* + * If the event does not have at least one of the privilege + * branch filters as in PERF_SAMPLE_BRANCH_PLM_ALL, the core + * perf will adjust its value based on perf event's existing + * privilege level via attr.exclude_[user|kernel|hv]. + * + * As event->attr.branch_sample_type might have been changed + * when the event reaches here, it is not possible to figure + * out whether the event originally had HV privilege request + * or got added via the core perf. Just report this situation + * once and continue ignoring if there are other instances. 
+ */ + if (event->attr.branch_sample_type & PERF_SAMPLE_BRANCH_HV) + pr_warn_once("does not support hypervisor privilege branch filter\n"); + + if (event->attr.branch_sample_type & PERF_SAMPLE_BRANCH_ABORT_TX) { + pr_warn_once("does not support aborted transaction branch filter\n"); + return false; + } + + if (event->attr.branch_sample_type & PERF_SAMPLE_BRANCH_NO_TX) { + pr_warn_once("does not support non transaction branch filter\n"); + return false; + } + + if (event->attr.branch_sample_type & PERF_SAMPLE_BRANCH_IN_TX) { + pr_warn_once("does not support in transaction branch filter\n"); + return false; + } + return !brbe_disabled(hw_events); +} + +void arm64_pmu_brbe_probe(struct pmu_hw_events *cpuc) +{ + u64 aa64dfr0, brbidr; + unsigned int brbe; + + aa64dfr0 = read_sysreg_s(SYS_ID_AA64DFR0_EL1); + brbe = cpuid_feature_extract_unsigned_field(aa64dfr0, ID_AA64DFR0_EL1_BRBE_SHIFT); + if (!brbe) { + set_brbe_disabled(cpuc); + return; + } else if (brbe == ID_AA64DFR0_EL1_BRBE_IMP) { + cpuc->brbe_v1p1 = false; + } else if (brbe == ID_AA64DFR0_EL1_BRBE_BRBE_V1P1) { + cpuc->brbe_v1p1 = true; + } + + brbidr = read_sysreg_s(SYS_BRBIDR0_EL1); + cpuc->brbe_format = brbe_fetch_format(brbidr); + if (cpuc->brbe_format != BRBIDR0_EL1_FORMAT_0) { + set_brbe_disabled(cpuc); + return; + } + + cpuc->brbe_cc = brbe_fetch_cc_bits(brbidr); + if (cpuc->brbe_cc != BRBIDR0_EL1_CC_20_BIT) { + set_brbe_disabled(cpuc); + return; + } + + cpuc->brbe_nr = brbe_fetch_numrec(brbidr); + if (!valid_brbe_nr(cpuc->brbe_nr)) { + set_brbe_disabled(cpuc); + return; + } +} + +void arm64_pmu_brbe_enable(struct pmu_hw_events *cpuc) +{ + u64 brbfcr, brbcr; + + if (brbe_disabled(cpuc)) + return; + + brbfcr = read_sysreg_s(SYS_BRBFCR_EL1); + brbfcr &= ~BRBFCR_EL1_BANK_MASK; + brbfcr &= ~(BRBFCR_EL1_EnI | BRBFCR_EL1_PAUSED | BRBE_FCR_MASK); + brbfcr |= (cpuc->brbfcr & BRBE_FCR_MASK); + write_sysreg_s(brbfcr, SYS_BRBFCR_EL1); + isb(); + + brbcr = read_sysreg_s(SYS_BRBCR_EL1); + brbcr &= ~BRBE_CR_MASK; + brbcr |= BRBCR_EL1_FZP; + brbcr |= (BRBCR_EL1_TS_PHYSICAL << BRBCR_EL1_TS_SHIFT); + brbcr |= (cpuc->brbcr & BRBE_CR_MASK); + write_sysreg_s(brbcr, SYS_BRBCR_EL1); + isb(); +} + +void arm64_pmu_brbe_disable(struct pmu_hw_events *cpuc) +{ + u64 brbcr; + + if (brbe_disabled(cpuc)) + return; + + brbcr = read_sysreg_s(SYS_BRBCR_EL1); + brbcr &= ~(BRBCR_EL1_E0BRE | BRBCR_EL1_E1BRE); + write_sysreg_s(brbcr, SYS_BRBCR_EL1); + isb(); +} + +static void perf_branch_to_brbfcr(struct pmu_hw_events *cpuc, int branch_type) +{ + cpuc->brbfcr = 0; + + if (branch_type & PERF_SAMPLE_BRANCH_ANY) { + cpuc->brbfcr |= BRBFCR_BRANCH_ALL; + return; + } + + if (branch_type & PERF_SAMPLE_BRANCH_ANY_CALL) + cpuc->brbfcr |= (BRBFCR_EL1_INDCALL | BRBFCR_EL1_DIRCALL); + + if (branch_type & PERF_SAMPLE_BRANCH_ANY_RETURN) + cpuc->brbfcr |= BRBFCR_EL1_RTN; + + if (branch_type & PERF_SAMPLE_BRANCH_IND_CALL) + cpuc->brbfcr |= BRBFCR_EL1_INDCALL; + + if (branch_type & PERF_SAMPLE_BRANCH_COND) + cpuc->brbfcr |= BRBFCR_EL1_CONDDIR; + + if (branch_type & PERF_SAMPLE_BRANCH_IND_JUMP) + cpuc->brbfcr |= BRBFCR_EL1_INDIRECT; + + if (branch_type & PERF_SAMPLE_BRANCH_CALL) + cpuc->brbfcr |= BRBFCR_EL1_DIRCALL; +} + +static void perf_branch_to_brbcr(struct pmu_hw_events *cpuc, int branch_type) +{ + cpuc->brbcr = (BRBCR_EL1_CC | BRBCR_EL1_MPRED); + + if (branch_type & PERF_SAMPLE_BRANCH_USER) + cpuc->brbcr |= BRBCR_EL1_E0BRE; + + if (branch_type & PERF_SAMPLE_BRANCH_NO_CYCLES) + cpuc->brbcr &= ~BRBCR_EL1_CC; + + if (branch_type & PERF_SAMPLE_BRANCH_NO_FLAGS) + cpuc->brbcr &= 
~BRBCR_EL1_MPRED; + + if (branch_type & PERF_SAMPLE_BRANCH_KERNEL) + cpuc->brbcr |= BRBCR_EL1_E1BRE; + else + return; + + /* + * The exception and exception return branches could be + * captured only when the event has necessary privilege + * indicated via branch type PERF_SAMPLE_BRANCH_KERNEL, + * which has been ascertained in generic perf. Please + * refer perf_copy_attr() for more details. + */ + if (branch_type & PERF_SAMPLE_BRANCH_ANY) { + cpuc->brbcr |= BRBCR_EL1_EXCEPTION; + cpuc->brbcr |= BRBCR_EL1_ERTN; + return; + } + + if (branch_type & PERF_SAMPLE_BRANCH_ANY_CALL) + cpuc->brbcr |= BRBCR_EL1_EXCEPTION; + + if (branch_type & PERF_SAMPLE_BRANCH_ANY_RETURN) + cpuc->brbcr |= BRBCR_EL1_ERTN; +} + + +void arm64_pmu_brbe_filter(struct pmu_hw_events *cpuc, struct perf_event *event) +{ + u64 branch_type = event->attr.branch_sample_type; + + if (brbe_disabled(cpuc)) + return; + + perf_branch_to_brbfcr(cpuc, branch_type); + perf_branch_to_brbcr(cpuc, branch_type); +} + +static int brbe_fetch_perf_type(u64 brbinf, bool *new_branch_type) +{ + int brbe_type = brbe_fetch_type(brbinf); + *new_branch_type = false; + + switch (brbe_type) { + case BRBINF_EL1_TYPE_UNCOND_DIR: + return PERF_BR_UNCOND; + case BRBINF_EL1_TYPE_INDIR: + return PERF_BR_IND; + case BRBINF_EL1_TYPE_DIR_LINK: + return PERF_BR_CALL; + case BRBINF_EL1_TYPE_INDIR_LINK: + return PERF_BR_IND_CALL; + case BRBINF_EL1_TYPE_RET_SUB: + return PERF_BR_RET; + case BRBINF_EL1_TYPE_COND_DIR: + return PERF_BR_COND; + case BRBINF_EL1_TYPE_CALL: + return PERF_BR_CALL; + case BRBINF_EL1_TYPE_TRAP: + return PERF_BR_SYSCALL; + case BRBINF_EL1_TYPE_RET_EXCPT: + return PERF_BR_ERET; + case BRBINF_EL1_TYPE_IRQ: + return PERF_BR_IRQ; + case BRBINF_EL1_TYPE_DEBUG_HALT: + *new_branch_type = true; + return PERF_BR_ARM64_DEBUG_HALT; + case BRBINF_EL1_TYPE_SERROR: + return PERF_BR_SERROR; + case BRBINF_EL1_TYPE_INST_DEBUG: + *new_branch_type = true; + return PERF_BR_ARM64_DEBUG_INST; + case BRBINF_EL1_TYPE_DATA_DEBUG: + *new_branch_type = true; + return PERF_BR_ARM64_DEBUG_DATA; + case BRBINF_EL1_TYPE_ALGN_FAULT: + *new_branch_type = true; + return PERF_BR_NEW_FAULT_ALGN; + case BRBINF_EL1_TYPE_INST_FAULT: + *new_branch_type = true; + return PERF_BR_NEW_FAULT_INST; + case BRBINF_EL1_TYPE_DATA_FAULT: + *new_branch_type = true; + return PERF_BR_NEW_FAULT_DATA; + case BRBINF_EL1_TYPE_FIQ: + *new_branch_type = true; + return PERF_BR_ARM64_FIQ; + case BRBINF_EL1_TYPE_DEBUG_EXIT: + *new_branch_type = true; + return PERF_BR_ARM64_DEBUG_EXIT; + default: + pr_warn("unknown branch type captured\n"); + return PERF_BR_UNKNOWN; + } +} + +static int brbe_fetch_perf_priv(u64 brbinf) +{ + int brbe_el = brbe_fetch_el(brbinf); + + switch (brbe_el) { + case BRBINF_EL1_EL_EL0: + return PERF_BR_PRIV_USER; + case BRBINF_EL1_EL_EL1: + return PERF_BR_PRIV_KERNEL; + case BRBINF_EL1_EL_EL2: + if (is_kernel_in_hyp_mode()) + return PERF_BR_PRIV_KERNEL; + return PERF_BR_PRIV_HV; + default: + pr_warn("unknown branch privilege captured\n"); + return PERF_BR_PRIV_UNKNOWN; + } +} + +static void capture_brbe_flags(struct pmu_hw_events *cpuc, struct perf_event *event, + u64 brbinf, int idx) +{ + int branch_type, type = brbe_record_valid(brbinf); + bool new_branch_type; + + if (!branch_sample_no_cycles(event)) + cpuc->branches->brbe_entries[idx].cycles = brbe_fetch_cycles(brbinf); + + if (branch_sample_type(event)) { + branch_type = brbe_fetch_perf_type(brbinf, &new_branch_type); + if (new_branch_type) { + cpuc->branches->brbe_entries[idx].type = PERF_BR_EXTEND_ABI; + 
+
+static void capture_brbe_flags(struct pmu_hw_events *cpuc, struct perf_event *event,
+			       u64 brbinf, int idx)
+{
+	int branch_type, type = brbe_record_valid(brbinf);
+	bool new_branch_type;
+
+	if (!branch_sample_no_cycles(event))
+		cpuc->branches->brbe_entries[idx].cycles = brbe_fetch_cycles(brbinf);
+
+	if (branch_sample_type(event)) {
+		branch_type = brbe_fetch_perf_type(brbinf, &new_branch_type);
+		if (new_branch_type) {
+			cpuc->branches->brbe_entries[idx].type = PERF_BR_EXTEND_ABI;
+			cpuc->branches->brbe_entries[idx].new_type = branch_type;
+		} else {
+			cpuc->branches->brbe_entries[idx].type = branch_type;
+		}
+	}
+
+	if (!branch_sample_no_flags(event)) {
+		/*
+		 * BRBINF_LASTFAILED does not indicate that the last transaction
+		 * failed or was aborted during the current branch record itself.
+		 * Rather, it indicates that all the branch records which were in
+		 * a transaction until the current branch record have failed. So
+		 * the entire BRBE buffer needs to be processed later on to find
+		 * all branch records which might have failed.
+		 */
+		cpuc->branches->brbe_entries[idx].abort = brbinf & BRBINF_EL1_LASTFAILED;
+
+		/*
+		 * This information (i.e. transaction state and mispredicts)
+		 * is not available for target only branch records.
+		 */
+		if (type != BRBINF_EL1_VALID_TARGET) {
+			cpuc->branches->brbe_entries[idx].mispred = brbinf & BRBINF_EL1_MPRED;
+			cpuc->branches->brbe_entries[idx].predicted = !(brbinf & BRBINF_EL1_MPRED);
+			cpuc->branches->brbe_entries[idx].in_tx = brbinf & BRBINF_EL1_T;
+		}
+	}
+
+	if (branch_sample_priv(event)) {
+		/*
+		 * The branch privilege level is not available for source
+		 * only branch records.
+		 */
+		if (type != BRBINF_EL1_VALID_SOURCE)
+			cpuc->branches->brbe_entries[idx].priv = brbe_fetch_perf_priv(brbinf);
+	}
+}
+
+/*
+ * A branch record with BRBINF_EL1.LASTFAILED set implies that all
+ * preceding consecutive branch records that were in a transaction
+ * (i.e. their BRBINF_EL1.TX set) have been aborted.
+ *
+ * Similarly, BRBFCR_EL1.LASTFAILED set indicates that all preceding
+ * consecutive branch records up to the last record, which were in a
+ * transaction (i.e. their BRBINF_EL1.TX set), have been aborted.
+ *
+ * ---------------------------------	-------------------
+ * | 00 | BRBSRC | BRBTGT | BRBINF |	| TX = 1 | LF = 0 |	[TX success]
+ * ---------------------------------	-------------------
+ * | 01 | BRBSRC | BRBTGT | BRBINF |	| TX = 1 | LF = 0 |	[TX success]
+ * ---------------------------------	-------------------
+ * | 02 | BRBSRC | BRBTGT | BRBINF |	| TX = 0 | LF = 0 |
+ * ---------------------------------	-------------------
+ * | 03 | BRBSRC | BRBTGT | BRBINF |	| TX = 1 | LF = 0 |	[TX failed]
+ * ---------------------------------	-------------------
+ * | 04 | BRBSRC | BRBTGT | BRBINF |	| TX = 1 | LF = 0 |	[TX failed]
+ * ---------------------------------	-------------------
+ * | 05 | BRBSRC | BRBTGT | BRBINF |	| TX = 0 | LF = 1 |
+ * ---------------------------------	-------------------
+ * | .. | BRBSRC | BRBTGT | BRBINF |	| TX = 0 | LF = 0 |
+ * ---------------------------------	-------------------
+ * | 61 | BRBSRC | BRBTGT | BRBINF |	| TX = 1 | LF = 0 |	[TX failed]
+ * ---------------------------------	-------------------
+ * | 62 | BRBSRC | BRBTGT | BRBINF |	| TX = 1 | LF = 0 |	[TX failed]
+ * ---------------------------------	-------------------
+ * | 63 | BRBSRC | BRBTGT | BRBINF |	| TX = 1 | LF = 0 |	[TX failed]
+ * ---------------------------------	-------------------
+ *
+ * BRBFCR_EL1.LASTFAILED == 1
+ *
+ * Here BRBFCR_EL1.LASTFAILED fails all those consecutive, and also
+ * in transaction, branches near the end of the BRBE buffer.
+ */
+static void process_branch_aborts(struct pmu_hw_events *cpuc)
+{
+	u64 brbfcr = read_sysreg_s(SYS_BRBFCR_EL1);
+	bool lastfailed = !!(brbfcr & BRBFCR_EL1_LASTFAILED);
+	int idx = cpuc->brbe_nr - 1;
+
+	do {
+		if (cpuc->branches->brbe_entries[idx].in_tx) {
+			cpuc->branches->brbe_entries[idx].abort = lastfailed;
+		} else {
+			lastfailed = cpuc->branches->brbe_entries[idx].abort;
+			cpuc->branches->brbe_entries[idx].abort = false;
+		}
+	} while (idx--, idx >= 0);
+}
+
+void arm64_pmu_brbe_read(struct pmu_hw_events *cpuc, struct perf_event *event)
+{
+	u64 brbinf;
+	int idx;
+
+	if (brbe_disabled(cpuc))
+		return;
+
+	set_brbe_paused();
+	for (idx = 0; idx < cpuc->brbe_nr; idx++) {
+		select_brbe_bank_index(idx);
+		brbinf = get_brbinf_reg(idx);
+		/*
+		 * There are no valid entries left in the buffer. Abort the
+		 * branch record processing to save some cycles and also to
+		 * reduce the capture/process load for user space.
+		 */
+		if (brbe_invalid(brbinf))
+			break;
+
+		if (brbe_valid(brbinf)) {
+			cpuc->branches->brbe_entries[idx].from = get_brbsrc_reg(idx);
+			cpuc->branches->brbe_entries[idx].to = get_brbtgt_reg(idx);
+		} else if (brbe_source(brbinf)) {
+			cpuc->branches->brbe_entries[idx].from = get_brbsrc_reg(idx);
+			cpuc->branches->brbe_entries[idx].to = 0;
+		} else if (brbe_target(brbinf)) {
+			cpuc->branches->brbe_entries[idx].from = 0;
+			cpuc->branches->brbe_entries[idx].to = get_brbtgt_reg(idx);
+		}
+		capture_brbe_flags(cpuc, event, brbinf, idx);
+	}
+	cpuc->branches->brbe_stack.nr = idx;
+	cpuc->branches->brbe_stack.hw_idx = -1ULL;
+	process_branch_aborts(cpuc);
+}
+
+void arm64_pmu_brbe_reset(struct pmu_hw_events *cpuc)
+{
+	if (brbe_disabled(cpuc))
+		return;
+
+	asm volatile(BRB_IALL);
+	isb();
+}
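[ Worked illustration, not part of the patch: the backward pass performed by
  process_branch_aborts() above can be reproduced over a plain array, which may
  make the transaction diagram easier to follow. Applied to that 64 record
  example, records 61..63 inherit the buffer wide BRBFCR_EL1.LASTFAILED, while
  records 03..04 inherit the LASTFAILED bit that record 05 carried in its
  BRBINF; records 00..01 stay unmarked. demo_entry and demo_mark_aborts are
  made-up names. ]

#include <stdbool.h>

struct demo_entry {
	bool in_tx;	/* mirrors BRBINF_EL1.T */
	bool abort;	/* seeded from BRBINF_EL1.LASTFAILED during capture */
};

static void demo_mark_aborts(struct demo_entry *e, int nr, bool buffer_lastfailed)
{
	bool lastfailed = buffer_lastfailed;	/* BRBFCR_EL1.LASTFAILED */
	int idx;

	for (idx = nr - 1; idx >= 0; idx--) {
		if (e[idx].in_tx) {
			/* propagate the pending failure into the TX run */
			e[idx].abort = lastfailed;
		} else {
			/* pick up this record's LASTFAILED for older records */
			lastfailed = e[idx].abort;
			/* non-TX records themselves never abort */
			e[idx].abort = false;
		}
	}
}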
| + * --------------------------------- ------ + * | 63 | BRBSRC | BRBTGT | BRBINF | | 31 | + * --------------------------------- ------ + */ +#define BRBE_BANK0_IDX_MIN 0 +#define BRBE_BANK0_IDX_MAX 31 +#define BRBE_BANK1_IDX_MIN 32 +#define BRBE_BANK1_IDX_MAX 63 + +#define RETURN_READ_BRBSRCN(n) \ + read_sysreg_s(SYS_BRBSRC##n##_EL1) + +#define RETURN_READ_BRBTGTN(n) \ + read_sysreg_s(SYS_BRBTGT##n##_EL1) + +#define RETURN_READ_BRBINFN(n) \ + read_sysreg_s(SYS_BRBINF##n##_EL1) + +#define BRBE_REGN_CASE(n, case_macro) \ + case n: return case_macro(n); break + +#define BRBE_REGN_SWITCH(x, case_macro) \ + do { \ + switch (x) { \ + BRBE_REGN_CASE(0, case_macro); \ + BRBE_REGN_CASE(1, case_macro); \ + BRBE_REGN_CASE(2, case_macro); \ + BRBE_REGN_CASE(3, case_macro); \ + BRBE_REGN_CASE(4, case_macro); \ + BRBE_REGN_CASE(5, case_macro); \ + BRBE_REGN_CASE(6, case_macro); \ + BRBE_REGN_CASE(7, case_macro); \ + BRBE_REGN_CASE(8, case_macro); \ + BRBE_REGN_CASE(9, case_macro); \ + BRBE_REGN_CASE(10, case_macro); \ + BRBE_REGN_CASE(11, case_macro); \ + BRBE_REGN_CASE(12, case_macro); \ + BRBE_REGN_CASE(13, case_macro); \ + BRBE_REGN_CASE(14, case_macro); \ + BRBE_REGN_CASE(15, case_macro); \ + BRBE_REGN_CASE(16, case_macro); \ + BRBE_REGN_CASE(17, case_macro); \ + BRBE_REGN_CASE(18, case_macro); \ + BRBE_REGN_CASE(19, case_macro); \ + BRBE_REGN_CASE(20, case_macro); \ + BRBE_REGN_CASE(21, case_macro); \ + BRBE_REGN_CASE(22, case_macro); \ + BRBE_REGN_CASE(23, case_macro); \ + BRBE_REGN_CASE(24, case_macro); \ + BRBE_REGN_CASE(25, case_macro); \ + BRBE_REGN_CASE(26, case_macro); \ + BRBE_REGN_CASE(27, case_macro); \ + BRBE_REGN_CASE(28, case_macro); \ + BRBE_REGN_CASE(29, case_macro); \ + BRBE_REGN_CASE(30, case_macro); \ + BRBE_REGN_CASE(31, case_macro); \ + default: \ + pr_warn("unknown register index\n"); \ + return -1; \ + } \ + } while (0) + +static inline int buffer_to_brbe_idx(int buffer_idx) +{ + return buffer_idx % 32; +} + +static inline u64 get_brbsrc_reg(int buffer_idx) +{ + int brbe_idx = buffer_to_brbe_idx(buffer_idx); + + BRBE_REGN_SWITCH(brbe_idx, RETURN_READ_BRBSRCN); +} + +static inline u64 get_brbtgt_reg(int buffer_idx) +{ + int brbe_idx = buffer_to_brbe_idx(buffer_idx); + + BRBE_REGN_SWITCH(brbe_idx, RETURN_READ_BRBTGTN); +} + +static inline u64 get_brbinf_reg(int buffer_idx) +{ + int brbe_idx = buffer_to_brbe_idx(buffer_idx); + + BRBE_REGN_SWITCH(brbe_idx, RETURN_READ_BRBINFN); +} + +static inline u64 brbe_record_valid(u64 brbinf) +{ + return (brbinf & BRBINF_EL1_VALID_MASK) >> BRBINF_EL1_VALID_SHIFT; +} + +static inline bool brbe_invalid(u64 brbinf) +{ + return brbe_record_valid(brbinf) == BRBINF_EL1_VALID_NONE; +} + +static inline bool brbe_valid(u64 brbinf) +{ + return brbe_record_valid(brbinf) == BRBINF_EL1_VALID_FULL; +} + +static inline bool brbe_source(u64 brbinf) +{ + return brbe_record_valid(brbinf) == BRBINF_EL1_VALID_SOURCE; +} + +static inline bool brbe_target(u64 brbinf) +{ + return brbe_record_valid(brbinf) == BRBINF_EL1_VALID_TARGET; +} + +static inline int brbe_fetch_cycles(u64 brbinf) +{ + /* + * Captured cycle count is unknown and hence + * should not be passed on the user space. 
+static inline int brbe_fetch_cycles(u64 brbinf)
+{
+	/*
+	 * The captured cycle count is unknown and hence should
+	 * not be passed on to user space.
+	 */
+	if (brbinf & BRBINF_EL1_CCU)
+		return 0;
+
+	return (brbinf & BRBINF_EL1_CC_MASK) >> BRBINF_EL1_CC_SHIFT;
+}
+
+static inline int brbe_fetch_type(u64 brbinf)
+{
+	return (brbinf & BRBINF_EL1_TYPE_MASK) >> BRBINF_EL1_TYPE_SHIFT;
+}
+
+static inline int brbe_fetch_el(u64 brbinf)
+{
+	return (brbinf & BRBINF_EL1_EL_MASK) >> BRBINF_EL1_EL_SHIFT;
+}
+
+static inline int brbe_fetch_numrec(u64 brbidr)
+{
+	return (brbidr & BRBIDR0_EL1_NUMREC_MASK) >> BRBIDR0_EL1_NUMREC_SHIFT;
+}
+
+static inline int brbe_fetch_format(u64 brbidr)
+{
+	return (brbidr & BRBIDR0_EL1_FORMAT_MASK) >> BRBIDR0_EL1_FORMAT_SHIFT;
+}
+
+static inline int brbe_fetch_cc_bits(u64 brbidr)
+{
+	return (brbidr & BRBIDR0_EL1_CC_MASK) >> BRBIDR0_EL1_CC_SHIFT;
+}
+
+static inline void select_brbe_bank(int bank)
+{
+	static int brbe_current_bank = -1;
+	u64 brbfcr;
+
+	if (brbe_current_bank == bank)
+		return;
+
+	WARN_ON(bank > 1);
+	brbfcr = read_sysreg_s(SYS_BRBFCR_EL1);
+	brbfcr &= ~BRBFCR_EL1_BANK_MASK;
+	brbfcr |= ((bank << BRBFCR_EL1_BANK_SHIFT) & BRBFCR_EL1_BANK_MASK);
+	write_sysreg_s(brbfcr, SYS_BRBFCR_EL1);
+	isb();
+	brbe_current_bank = bank;
+}
+
+static inline void select_brbe_bank_index(int buffer_idx)
+{
+	switch (buffer_idx) {
+	case BRBE_BANK0_IDX_MIN ... BRBE_BANK0_IDX_MAX:
+		select_brbe_bank(0);
+		break;
+	case BRBE_BANK1_IDX_MIN ... BRBE_BANK1_IDX_MAX:
+		select_brbe_bank(1);
+		break;
+	default:
+		pr_warn("unsupported BRBE index\n");
+	}
+}
+
+static inline bool valid_brbe_nr(int brbe_nr)
+{
+	switch (brbe_nr) {
+	case BRBIDR0_EL1_NUMREC_8:
+	case BRBIDR0_EL1_NUMREC_16:
+	case BRBIDR0_EL1_NUMREC_32:
+	case BRBIDR0_EL1_NUMREC_64:
+		return true;
+	default:
+		pr_warn("unsupported BRBE entries\n");
+		return false;
+	}
+}
+
+static inline bool brbe_paused(void)
+{
+	u64 brbfcr = read_sysreg_s(SYS_BRBFCR_EL1);
+
+	return brbfcr & BRBFCR_EL1_PAUSED;
+}
+
+static inline void set_brbe_paused(void)
+{
+	u64 brbfcr = read_sysreg_s(SYS_BRBFCR_EL1);
+
+	write_sysreg_s(brbfcr | BRBFCR_EL1_PAUSED, SYS_BRBFCR_EL1);
+	isb();
+}
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index bda0d9984a98..9c23b2b58b3d 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -168,6 +168,26 @@ struct arm_pmu {
 	unsigned long acpi_cpuid;
 };
 
+#ifdef CONFIG_ARM_BRBE_PMU
+void arm64_pmu_brbe_filter(struct pmu_hw_events *hw_events, struct perf_event *event);
+void arm64_pmu_brbe_read(struct pmu_hw_events *cpuc, struct perf_event *event);
+void arm64_pmu_brbe_disable(struct pmu_hw_events *cpuc);
+void arm64_pmu_brbe_enable(struct pmu_hw_events *cpuc);
+void arm64_pmu_brbe_probe(struct pmu_hw_events *cpuc);
+void arm64_pmu_brbe_reset(struct pmu_hw_events *cpuc);
+bool arm64_pmu_brbe_supported(struct perf_event *event);
+#else
+static inline void arm64_pmu_brbe_filter(struct pmu_hw_events *hw_events, struct perf_event *event)
+{
+}
+static inline void arm64_pmu_brbe_read(struct pmu_hw_events *cpuc, struct perf_event *event) { }
+static inline void arm64_pmu_brbe_disable(struct pmu_hw_events *cpuc) { }
+static inline void arm64_pmu_brbe_enable(struct pmu_hw_events *cpuc) { }
+static inline void arm64_pmu_brbe_probe(struct pmu_hw_events *cpuc) { }
+static inline void arm64_pmu_brbe_reset(struct pmu_hw_events *cpuc) { }
+static inline bool arm64_pmu_brbe_supported(struct perf_event *event) { return false; }
+#endif
+
 #define to_arm_pmu(p) (container_of(p, struct arm_pmu, pmu))
 
 u64 armpmu_event_update(struct perf_event *event);
charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anshuman Khandual X-Patchwork-Id: 16252 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:6687:0:0:0:0:0 with SMTP id l7csp1869395wru; Sun, 6 Nov 2022 22:29:16 -0800 (PST) X-Google-Smtp-Source: AA0mqf59dXAZEFnicer+SKxqxGVZz5ak76ft1+jwc6Xu9ds83BC1n8JthbyzYINooNrcCEUtksTb X-Received: by 2002:a05:6402:111a:b0:466:849d:eb5c with SMTP id u26-20020a056402111a00b00466849deb5cmr366187edv.131.1667802556376; Sun, 06 Nov 2022 22:29:16 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1667802556; cv=none; d=google.com; s=arc-20160816; b=RXmGCXZ8cepp7MAFkicUo3tsVYXpoGxkF8/7pRtjjesEQEOcyYWPqqV0862grgTlkn 7cu2O+Su5L6TZ8ybAnc14hlQetKZwVmStZoAZmfE7InmcHxVwmyEzC8g19arN5nmZ1sv 5fIMSLjAqWXpANjxq2Q6st1lnBiUP8XVnoWAbNi4J17vasshXNPczqwm7Jqway1yPIKn PXFID5TX26Xei0kFOc76Ur5pMW0Y4Jj4mBLObYXhZeSshh914TufFO6UcedttfPaYM87 Jo1A4TtCBBSmMrJqVBYaX6Wbh7B3vt02/Yo0nXDsPhXlT5WJzJjPcEamKLGQNsFpLoj+ k1Iw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=agDJp7zatiBR+mLOzJfKT3bAIHTIAxFyHnThaZDAjBc=; b=JlxglQSigNsfFBKJJrHbDgi/uRlefNIvX9ViPCXwRaNW7KUdgdXY1SuD61jFsqS9on JrfpQwR3OcfatkwPhNXpK7JqYrH7hZ6kXcjiiaW0GYlA7eeS18/BbhF+qseCz5+HATs+ lvnzf6bmlz3zkU9xPEh0xjpmS9yO5qqoorkkaqow/x9tc1sQKO9KPAPAmlvhy/kqZ+9i c5WZWRNNJhBwPnPfRpRCCp3Z1e0f1x9BPWESANTXeA4cXskds7ltHcbdTrzU/qiIGg7X UAn6/a2MTOhafvhey7gxxspKhMP8hz4l0fKlYij0ybjIhj3dMm9AMKrddezYCLbCD2Dd xLxg== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: from out1.vger.email (out1.vger.email. 
From: Anshuman Khandual
To: linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, peterz@infradead.org,
    acme@kernel.org, mark.rutland@arm.com, will@kernel.org,
    catalin.marinas@arm.com
Cc: Anshuman Khandual, Mark Brown, James Clark, Rob Herring, Marc Zyngier,
    Suzuki Poulose, Ingo Molnar
Subject: [PATCH V5 7/7] arm64/perf: Enable branch stack sampling
Date: Mon, 7 Nov 2022 11:55:14 +0530
Message-Id: <20221107062514.2851047-8-anshuman.khandual@arm.com>
In-Reply-To: <20221107062514.2851047-1-anshuman.khandual@arm.com>
References: <20221107062514.2851047-1-anshuman.khandual@arm.com>

Now that all the required pieces are in place, enable perf branch stack
sampling support on the arm64 platform by removing the gate which blocks
it in armpmu_event_init().

Cc: Mark Rutland
Cc: Will Deacon
Cc: Catalin Marinas
Cc: linux-kernel@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Anshuman Khandual
---
 drivers/perf/arm_pmu.c | 25 ++++++++++++++++++++++---
 1 file changed, 22 insertions(+), 3 deletions(-)

diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index 1a8dca4e513e..dc5e4f9aca22 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -537,9 +537,28 @@ static int armpmu_event_init(struct perf_event *event)
 	    !cpumask_test_cpu(event->cpu, &armpmu->supported_cpus))
 		return -ENOENT;
 
-	/* does not support taken branch sampling */
-	if (has_branch_stack(event))
-		return -EOPNOTSUPP;
+	if (has_branch_stack(event)) {
+		/*
+		 * BRBE support is absent. CONFIG_ARM_BRBE_PMU must be
+		 * selected in the config before branch stack sampling
+		 * events can be requested.
+		 */
+		if (!IS_ENABLED(CONFIG_ARM_BRBE_PMU)) {
+			pr_info("BRBE is disabled, select CONFIG_ARM_BRBE_PMU\n");
+			return -EOPNOTSUPP;
+		}
+
+		/*
+		 * Branch stack sampling events cannot be supported if
+		 * either the required driver is absent or the BRBE
+		 * buffer itself is not supported. Checking for the
+		 * callback also prevents a crash in case it is absent.
+		 */
+		if (!armpmu->brbe_supported || !armpmu->brbe_supported(event)) {
+			pr_info("BRBE is not supported\n");
+			return -EOPNOTSUPP;
+		}
+	}
 
 	if (armpmu->map_event(event) == -ENOENT)
 		return -ENOENT;
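[ Usage note, not part of the patch: once this gate is lifted and the BRBE driver
  is built in, branch stack sampling can be exercised from the perf tool in the
  usual way, subject to the filters that arm64_pmu_brbe_supported() rejects. An
  illustrative invocation, assuming a hypothetical ./workload binary, would be: ]

  # sample user space branches of a workload, then inspect the captured stacks
  perf record -j any,u -e cycles -- ./workload
  perf report --branch-stack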