From patchwork Mon Oct 17 05:57:07 2022
From: Anshuman Khandual
To: linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, peterz@infradead.org, acme@kernel.org,
    mark.rutland@arm.com, will@kernel.org, catalin.marinas@arm.com
Cc: Anshuman Khandual, Mark Brown, James Clark, Rob Herring, Marc Zyngier,
    Suzuki Poulose, Ingo Molnar
Subject: [PATCH V4 1/7] arm64/perf: Add BRBE registers and fields
Date: Mon, 17 Oct 2022 11:27:07 +0530
Message-Id: <20221017055713.451092-2-anshuman.khandual@arm.com>
In-Reply-To: <20221017055713.451092-1-anshuman.khandual@arm.com>
References: <20221017055713.451092-1-anshuman.khandual@arm.com>

This adds BRBE related register definitions and the various related field
macros therein. These will be used subsequently by the BRBE driver which is
added later in this series.
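For readers unfamiliar with the generated accessors: the tools/sysreg
descriptions below expand into <REG>_<FIELD>_MASK/_SHIFT and enum value
macros, which driver code can combine with the generic bitfield helpers. A
minimal illustrative sketch, not part of the patch, assuming the _MASK and
enum names that the description added below generates:

  #include <linux/bitfield.h>

  /* Decode the branch type out of a raw BRBINF<n>_EL1 value. */
  static inline bool brbinf_is_cond_branch(u64 brbinf)
  {
          u64 type = FIELD_GET(BRBINF_EL1_TYPE_MASK, brbinf);

          return type == BRBINF_EL1_TYPE_COND_DIR;
  }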
Cc: Catalin Marinas Cc: Will Deacon Cc: Marc Zyngier Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Anshuman Khandual --- arch/arm64/include/asm/sysreg.h | 103 ++++++++++++++++++++ arch/arm64/tools/sysreg | 161 ++++++++++++++++++++++++++++++++ 2 files changed, 264 insertions(+) diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h index 7d301700d1a9..78335c7807dc 100644 --- a/arch/arm64/include/asm/sysreg.h +++ b/arch/arm64/include/asm/sysreg.h @@ -161,6 +161,109 @@ #define SYS_DBGDTRTX_EL0 sys_reg(2, 3, 0, 5, 0) #define SYS_DBGVCR32_EL2 sys_reg(2, 4, 0, 7, 0) +#define __SYS_BRBINFO(n) sys_reg(2, 1, 8, ((n) & 0xf), ((((n) & 0x10)) >> 2 + 0)) +#define __SYS_BRBSRC(n) sys_reg(2, 1, 8, ((n) & 0xf), ((((n) & 0x10)) >> 2 + 1)) +#define __SYS_BRBTGT(n) sys_reg(2, 1, 8, ((n) & 0xf), ((((n) & 0x10)) >> 2 + 2)) + +#define SYS_BRBINF0_EL1 __SYS_BRBINFO(0) +#define SYS_BRBINF1_EL1 __SYS_BRBINFO(1) +#define SYS_BRBINF2_EL1 __SYS_BRBINFO(2) +#define SYS_BRBINF3_EL1 __SYS_BRBINFO(3) +#define SYS_BRBINF4_EL1 __SYS_BRBINFO(4) +#define SYS_BRBINF5_EL1 __SYS_BRBINFO(5) +#define SYS_BRBINF6_EL1 __SYS_BRBINFO(6) +#define SYS_BRBINF7_EL1 __SYS_BRBINFO(7) +#define SYS_BRBINF8_EL1 __SYS_BRBINFO(8) +#define SYS_BRBINF9_EL1 __SYS_BRBINFO(9) +#define SYS_BRBINF10_EL1 __SYS_BRBINFO(10) +#define SYS_BRBINF11_EL1 __SYS_BRBINFO(11) +#define SYS_BRBINF12_EL1 __SYS_BRBINFO(12) +#define SYS_BRBINF13_EL1 __SYS_BRBINFO(13) +#define SYS_BRBINF14_EL1 __SYS_BRBINFO(14) +#define SYS_BRBINF15_EL1 __SYS_BRBINFO(15) +#define SYS_BRBINF16_EL1 __SYS_BRBINFO(16) +#define SYS_BRBINF17_EL1 __SYS_BRBINFO(17) +#define SYS_BRBINF18_EL1 __SYS_BRBINFO(18) +#define SYS_BRBINF19_EL1 __SYS_BRBINFO(19) +#define SYS_BRBINF20_EL1 __SYS_BRBINFO(20) +#define SYS_BRBINF21_EL1 __SYS_BRBINFO(21) +#define SYS_BRBINF22_EL1 __SYS_BRBINFO(22) +#define SYS_BRBINF23_EL1 __SYS_BRBINFO(23) +#define SYS_BRBINF24_EL1 __SYS_BRBINFO(24) +#define SYS_BRBINF25_EL1 __SYS_BRBINFO(25) +#define SYS_BRBINF26_EL1 __SYS_BRBINFO(26) +#define SYS_BRBINF27_EL1 __SYS_BRBINFO(27) +#define SYS_BRBINF28_EL1 __SYS_BRBINFO(28) +#define SYS_BRBINF29_EL1 __SYS_BRBINFO(29) +#define SYS_BRBINF30_EL1 __SYS_BRBINFO(30) +#define SYS_BRBINF31_EL1 __SYS_BRBINFO(31) + +#define SYS_BRBSRC0_EL1 __SYS_BRBSRC(0) +#define SYS_BRBSRC1_EL1 __SYS_BRBSRC(1) +#define SYS_BRBSRC2_EL1 __SYS_BRBSRC(2) +#define SYS_BRBSRC3_EL1 __SYS_BRBSRC(3) +#define SYS_BRBSRC4_EL1 __SYS_BRBSRC(4) +#define SYS_BRBSRC5_EL1 __SYS_BRBSRC(5) +#define SYS_BRBSRC6_EL1 __SYS_BRBSRC(6) +#define SYS_BRBSRC7_EL1 __SYS_BRBSRC(7) +#define SYS_BRBSRC8_EL1 __SYS_BRBSRC(8) +#define SYS_BRBSRC9_EL1 __SYS_BRBSRC(9) +#define SYS_BRBSRC10_EL1 __SYS_BRBSRC(10) +#define SYS_BRBSRC11_EL1 __SYS_BRBSRC(11) +#define SYS_BRBSRC12_EL1 __SYS_BRBSRC(12) +#define SYS_BRBSRC13_EL1 __SYS_BRBSRC(13) +#define SYS_BRBSRC14_EL1 __SYS_BRBSRC(14) +#define SYS_BRBSRC15_EL1 __SYS_BRBSRC(15) +#define SYS_BRBSRC16_EL1 __SYS_BRBSRC(16) +#define SYS_BRBSRC17_EL1 __SYS_BRBSRC(17) +#define SYS_BRBSRC18_EL1 __SYS_BRBSRC(18) +#define SYS_BRBSRC19_EL1 __SYS_BRBSRC(19) +#define SYS_BRBSRC20_EL1 __SYS_BRBSRC(20) +#define SYS_BRBSRC21_EL1 __SYS_BRBSRC(21) +#define SYS_BRBSRC22_EL1 __SYS_BRBSRC(22) +#define SYS_BRBSRC23_EL1 __SYS_BRBSRC(23) +#define SYS_BRBSRC24_EL1 __SYS_BRBSRC(24) +#define SYS_BRBSRC25_EL1 __SYS_BRBSRC(25) +#define SYS_BRBSRC26_EL1 __SYS_BRBSRC(26) +#define SYS_BRBSRC27_EL1 __SYS_BRBSRC(27) +#define SYS_BRBSRC28_EL1 __SYS_BRBSRC(28) +#define SYS_BRBSRC29_EL1 __SYS_BRBSRC(29) +#define 
SYS_BRBSRC30_EL1 __SYS_BRBSRC(30) +#define SYS_BRBSRC31_EL1 __SYS_BRBSRC(31) + +#define SYS_BRBTGT0_EL1 __SYS_BRBTGT(0) +#define SYS_BRBTGT1_EL1 __SYS_BRBTGT(1) +#define SYS_BRBTGT2_EL1 __SYS_BRBTGT(2) +#define SYS_BRBTGT3_EL1 __SYS_BRBTGT(3) +#define SYS_BRBTGT4_EL1 __SYS_BRBTGT(4) +#define SYS_BRBTGT5_EL1 __SYS_BRBTGT(5) +#define SYS_BRBTGT6_EL1 __SYS_BRBTGT(6) +#define SYS_BRBTGT7_EL1 __SYS_BRBTGT(7) +#define SYS_BRBTGT8_EL1 __SYS_BRBTGT(8) +#define SYS_BRBTGT9_EL1 __SYS_BRBTGT(9) +#define SYS_BRBTGT10_EL1 __SYS_BRBTGT(10) +#define SYS_BRBTGT11_EL1 __SYS_BRBTGT(11) +#define SYS_BRBTGT12_EL1 __SYS_BRBTGT(12) +#define SYS_BRBTGT13_EL1 __SYS_BRBTGT(13) +#define SYS_BRBTGT14_EL1 __SYS_BRBTGT(14) +#define SYS_BRBTGT15_EL1 __SYS_BRBTGT(15) +#define SYS_BRBTGT16_EL1 __SYS_BRBTGT(16) +#define SYS_BRBTGT17_EL1 __SYS_BRBTGT(17) +#define SYS_BRBTGT18_EL1 __SYS_BRBTGT(18) +#define SYS_BRBTGT19_EL1 __SYS_BRBTGT(19) +#define SYS_BRBTGT20_EL1 __SYS_BRBTGT(20) +#define SYS_BRBTGT21_EL1 __SYS_BRBTGT(21) +#define SYS_BRBTGT22_EL1 __SYS_BRBTGT(22) +#define SYS_BRBTGT23_EL1 __SYS_BRBTGT(23) +#define SYS_BRBTGT24_EL1 __SYS_BRBTGT(24) +#define SYS_BRBTGT25_EL1 __SYS_BRBTGT(25) +#define SYS_BRBTGT26_EL1 __SYS_BRBTGT(26) +#define SYS_BRBTGT27_EL1 __SYS_BRBTGT(27) +#define SYS_BRBTGT28_EL1 __SYS_BRBTGT(28) +#define SYS_BRBTGT29_EL1 __SYS_BRBTGT(29) +#define SYS_BRBTGT30_EL1 __SYS_BRBTGT(30) +#define SYS_BRBTGT31_EL1 __SYS_BRBTGT(31) + #define SYS_MIDR_EL1 sys_reg(3, 0, 0, 0, 0) #define SYS_MPIDR_EL1 sys_reg(3, 0, 0, 0, 5) #define SYS_REVIDR_EL1 sys_reg(3, 0, 0, 0, 6) diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg index 384757a7eda9..e14af67efc51 100644 --- a/arch/arm64/tools/sysreg +++ b/arch/arm64/tools/sysreg @@ -167,6 +167,167 @@ Enum 3:0 BT EndEnum EndSysreg + +# This is just a dummy register declaration to get all common field masks and +# shifts for accessing given BRBINF contents. 
+Sysreg BRBINF_EL1 2 1 8 0 0 +Res0 63:47 +Field 46 CCU +Field 45:32 CC +Res0 31:18 +Field 17 LASTFAILED +Field 16 T +Res0 15:14 +Enum 13:8 TYPE + 0b000000 UNCOND_DIR + 0b000001 INDIR + 0b000010 DIR_LINK + 0b000011 INDIR_LINK + 0b000101 RET_SUB + 0b000111 RET_EXCPT + 0b001000 COND_DIR + 0b100001 DEBUG_HALT + 0b100010 CALL + 0b100011 TRAP + 0b100100 SERROR + 0b100110 INST_DEBUG + 0b100111 DATA_DEBUG + 0b101010 ALGN_FAULT + 0b101011 INST_FAULT + 0b101100 DATA_FAULT + 0b101110 IRQ + 0b101111 FIQ + 0b111001 DEBUG_EXIT +EndEnum +Enum 7:6 EL + 0b00 EL0 + 0b01 EL1 + 0b10 EL2 + 0b11 EL3 +EndEnum +Field 5 MPRED +Res0 4:2 +Enum 1:0 VALID + 0b00 NONE + 0b01 TARGET + 0b10 SOURCE + 0b11 FULL +EndEnum +EndSysreg + +Sysreg BRBCR_EL1 2 1 9 0 0 +Res0 63:24 +Field 23 EXCEPTION +Field 22 ERTN +Res0 21:9 +Field 8 FZP +Res0 7 +Enum 6:5 TS + 0b1 VIRTUAL + 0b10 GST_PHYSICAL + 0b11 PHYSICAL +EndEnum +Field 4 MPRED +Field 3 CC +Res0 2 +Field 1 E1BRE +Field 0 E0BRE +EndSysreg + +Sysreg BRBFCR_EL1 2 1 9 0 1 +Res0 63:30 +Enum 29:28 BANK + 0b0 FIRST + 0b1 SECOND +EndEnum +Res0 27:23 +Field 22 CONDDIR +Field 21 DIRCALL +Field 20 INDCALL +Field 19 RTN +Field 18 INDIRECT +Field 17 DIRECT +Field 16 EnL +Res0 15:8 +Field 7 PAUSED +Field 6 LASTFAILED +Res0 5:0 +EndSysreg + +Sysreg BRBTS_EL1 2 1 9 0 2 +Field 63:0 TS +EndSysreg + +Sysreg BRBINFINJ_EL1 2 1 9 1 0 +Res0 63:47 +Field 46 CCU +Field 45:32 CC +Res0 31:18 +Field 17 LASTFAILED +Field 16 T +Res0 15:14 +Enum 13:8 TYPE + 0b000000 UNCOND_DIR + 0b000001 INDIR + 0b000010 DIR_LINK + 0b000011 INDIR_LINK + 0b000100 RET_SUB + 0b000100 RET_SUB + 0b000111 RET_EXCPT + 0b001000 COND_DIR + 0b100001 DEBUG_HALT + 0b100010 CALL + 0b100011 TRAP + 0b100100 SERROR + 0b100110 INST_DEBUG + 0b100111 DATA_DEBUG + 0b101010 ALGN_FAULT + 0b101011 INST_FAULT + 0b101100 DATA_FAULT + 0b101110 IRQ + 0b101111 FIQ + 0b111001 DEBUG_EXIT +EndEnum +Enum 7:6 EL + 0b00 EL0 + 0b01 EL1 + 0b10 EL2 + 0b11 EL3 +EndEnum +Field 5 MPRED +Res0 4:2 +Enum 1:0 VALID + 0b00 NONE + 0b01 TARGET + 0b10 SOURCE + 0b00 FULL +EndEnum +EndSysreg + +Sysreg BRBSRCINJ_EL1 2 1 9 1 1 +Field 63:0 ADDRESS +EndSysreg + +Sysreg BRBTGTINJ_EL1 2 1 9 1 2 +Field 63:0 ADDRESS +EndSysreg + +Sysreg BRBIDR0_EL1 2 1 9 2 0 +Res0 63:16 +Enum 15:12 CC + 0b101 20_BIT +EndEnum +Enum 11:8 FORMAT + 0b0 0 +EndEnum +Enum 7:0 NUMREC + 0b1000 8 + 0b10000 16 + 0b100000 32 + 0b1000000 64 +EndEnum +EndSysreg + Sysreg ID_AA64ZFR0_EL1 3 0 0 4 4 Res0 63:60 Enum 59:56 F64MM From patchwork Mon Oct 17 05:57:08 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anshuman Khandual X-Patchwork-Id: 3211 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:4ac7:0:0:0:0:0 with SMTP id y7csp1289216wrs; Sun, 16 Oct 2022 22:59:06 -0700 (PDT) X-Google-Smtp-Source: AMsMyM7zHtNydXRwJ/M1yi9GpSLHiyul8PN0NUfPm83+w0OtS6Zp6wQc9bihew2oLRhF9QEvNMF0 X-Received: by 2002:a65:6bd4:0:b0:443:94a1:429c with SMTP id e20-20020a656bd4000000b0044394a1429cmr9339858pgw.606.1665986345901; Sun, 16 Oct 2022 22:59:05 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1665986345; cv=none; d=google.com; s=arc-20160816; b=1ITPdBKTd4bKuc1jxCTp4LsiKrXVE4EM0zNqGH+w1aLU4gmQU/C/Fmr8wj2/dVQg4Z /Z02AVaGdVa/K9Tf4wCIowMeZ+8QdNX/29Zb7+G9IlHP7z6lj+GHkFoLqELYyj5ykKKj ATrq7/ea3pl/GdPoq/KmgaAcGn+9jyXfch/2FCmKuesUaX5LUB//zW682XXGSkPNJH/p DL5ojiu3BPBsdT03HwLNg2M8SWq2mfbFWStD0psO1PwRXfLsDGNBEOaPCM+5683scfG7 P1xs4s/Zh2fODDz1aU3XmkhRvrINbVH+gikg4TZBy48Mo0T4qFLlZy0vJLhDBhWKW0SR oATw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; 
From: Anshuman Khandual
To: linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, peterz@infradead.org, acme@kernel.org,
    mark.rutland@arm.com, will@kernel.org, catalin.marinas@arm.com
Cc: Anshuman Khandual, Mark Brown, James Clark, Rob Herring, Marc Zyngier,
    Suzuki Poulose, Ingo Molnar
Subject: [PATCH V4 2/7] arm64/perf: Update struct arm_pmu for BRBE
Date: Mon, 17 Oct 2022 11:27:08 +0530
Message-Id: <20221017055713.451092-3-anshuman.khandual@arm.com>
In-Reply-To: <20221017055713.451092-1-anshuman.khandual@arm.com>
References: <20221017055713.451092-1-anshuman.khandual@arm.com>

Although BRBE is an armv8 specific HW feature, abstracting out its various
function callbacks at the struct arm_pmu level is preferred, as it is cleaner
and easier to follow and maintain. Besides, some helpers i.e brbe_supported(),
brbe_probe() and brbe_reset() might not fit seamlessly when embedded via the
existing arm_pmu helpers in the armv8 implementation.

This updates struct arm_pmu to include all the required helpers that will
drive BRBE functionality for a given PMU implementation. These are the
following.

- brbe_filter	 : Convert perf event filters into BRBE HW filters
- brbe_probe	 : Probe BRBE HW and capture its attributes
- brbe_enable	 : Enable BRBE HW with a given config
- brbe_disable	 : Disable BRBE HW
- brbe_read	 : Read BRBE buffer for captured branch records
- brbe_reset	 : Reset BRBE buffer
- brbe_supported : Whether BRBE is supported or not

A BRBE driver implementation needs to provide these functionalities.

Cc: Will Deacon
Cc: Mark Rutland
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Catalin Marinas
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-perf-users@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual
---
 arch/arm64/kernel/perf_event.c | 36 ++++++++++++++++++++++++++++++++++
 include/linux/perf/arm_pmu.h   | 21 ++++++++++++++++++++
 2 files changed, 57 insertions(+)

diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 7b0643fe2f13..c97377e28288 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -1025,6 +1025,35 @@ static int armv8pmu_filter_match(struct perf_event *event)
 	return evtype != ARMV8_PMUV3_PERFCTR_CHAIN;
 }
 
+static void armv8pmu_brbe_filter(struct pmu_hw_events *hw_event, struct perf_event *event)
+{
+}
+
+static void armv8pmu_brbe_enable(struct pmu_hw_events *hw_event)
+{
+}
+
+static void armv8pmu_brbe_disable(struct pmu_hw_events *hw_event)
+{
+}
+
+static void armv8pmu_brbe_read(struct pmu_hw_events *hw_event, struct perf_event *event)
+{
+}
+
+static void armv8pmu_brbe_probe(struct pmu_hw_events *hw_event)
+{
+}
+
+static void armv8pmu_brbe_reset(struct pmu_hw_events *hw_event)
+{
+}
+
+static bool armv8pmu_brbe_supported(struct perf_event *event)
+{
+	return false;
+}
+
 static void armv8pmu_reset(void *info)
 {
 	struct arm_pmu *cpu_pmu = (struct arm_pmu *)info;
@@ -1257,6 +1286,13 @@ static int armv8_pmu_init(struct arm_pmu *cpu_pmu, char *name,
 	cpu_pmu->pmu.event_idx		= armv8pmu_user_event_idx;
 
+	cpu_pmu->brbe_filter		= armv8pmu_brbe_filter;
+	cpu_pmu->brbe_enable		= armv8pmu_brbe_enable;
+	cpu_pmu->brbe_disable		= armv8pmu_brbe_disable;
+	cpu_pmu->brbe_read		= armv8pmu_brbe_read;
+	cpu_pmu->brbe_probe		= armv8pmu_brbe_probe;
+	cpu_pmu->brbe_reset		= armv8pmu_brbe_reset;
+	cpu_pmu->brbe_supported		= armv8pmu_brbe_supported;
 	cpu_pmu->name			= name;
 	cpu_pmu->map_event		= map_event;
 	cpu_pmu->attr_groups[ARMPMU_ATTR_GROUP_EVENTS] = events ?
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index 0356cb6a215d..67a6d59786f2 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -101,6 +101,27 @@ struct arm_pmu {
 	void		(*reset)(void *);
 	int		(*map_event)(struct perf_event *event);
 	int		(*filter_match)(struct perf_event *event);
+
+	/* Convert perf event filters into BRBE HW filters */
+	void		(*brbe_filter)(struct pmu_hw_events *hw_events, struct perf_event *event);
+
+	/* Probe BRBE HW and capture its attributes */
+	void		(*brbe_probe)(struct pmu_hw_events *hw_events);
+
+	/* Enable BRBE HW with a given config */
+	void		(*brbe_enable)(struct pmu_hw_events *hw_events);
+
+	/* Disable BRBE HW */
+	void		(*brbe_disable)(struct pmu_hw_events *hw_events);
+
+	/* Process BRBE buffer for captured branch records */
+	void		(*brbe_read)(struct pmu_hw_events *hw_events, struct perf_event *event);
+
+	/* Reset BRBE buffer */
+	void		(*brbe_reset)(struct pmu_hw_events *hw_events);
+
+	/* Check whether BRBE is supported */
+	bool		(*brbe_supported)(struct perf_event *event);
 	int		num_events;
 	bool		secure_access; /* 32-bit ARM only */
 #define ARMV8_PMUV3_MAX_COMMON_EVENTS		0x40

From patchwork Mon Oct 17 05:57:09 2022
From: Anshuman Khandual
To: linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, peterz@infradead.org, acme@kernel.org,
    mark.rutland@arm.com, will@kernel.org, catalin.marinas@arm.com
Cc: Anshuman Khandual, Mark Brown, James Clark, Rob Herring, Marc Zyngier,
    Suzuki Poulose, Ingo Molnar
Subject: [PATCH V4 3/7] arm64/perf: Update struct pmu_hw_events for BRBE
Date: Mon, 17 Oct 2022 11:27:09 +0530
Message-Id: <20221017055713.451092-4-anshuman.khandual@arm.com>
In-Reply-To: <20221017055713.451092-1-anshuman.khandual@arm.com>
References: <20221017055713.451092-1-anshuman.khandual@arm.com>

BRBE related contexts and data for a single perf event instance will be
tracked in struct pmu_hw_events. Hence update the structure to accommodate
the required BRBE details.
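For context, the per-cpu 'branches' buffer introduced here is laid out so it
can be handed to perf as-is: struct brbe_records puts a struct
perf_branch_stack directly in front of the entries that back its flexible
entries[] array. The consuming side, added later in this series (patch 5/7),
boils down to the following in the PMU overflow handler:

  	if (has_branch_stack(event)) {
  		cpu_pmu->brbe_read(cpuc, event);
  		data.br_stack = &cpuc->branches->brbe_stack;
  		data.sample_flags |= PERF_SAMPLE_BRANCH_STACK;
  		cpu_pmu->brbe_reset(cpuc);
  	}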
Cc: Will Deacon Cc: Mark Rutland Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Anshuman Khandual --- drivers/perf/arm_pmu.c | 1 + include/linux/perf/arm_pmu.h | 27 +++++++++++++++++++++++++++ 2 files changed, 28 insertions(+) diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c index 3f07df5a7e95..5048a500441e 100644 --- a/drivers/perf/arm_pmu.c +++ b/drivers/perf/arm_pmu.c @@ -905,6 +905,7 @@ static struct arm_pmu *__armpmu_alloc(gfp_t flags) events = per_cpu_ptr(pmu->hw_events, cpu); raw_spin_lock_init(&events->pmu_lock); + events->branches = kmalloc(sizeof(struct brbe_records), flags); events->percpu_pmu = pmu; } diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h index 67a6d59786f2..bda0d9984a98 100644 --- a/include/linux/perf/arm_pmu.h +++ b/include/linux/perf/arm_pmu.h @@ -44,6 +44,16 @@ static_assert((PERF_EVENT_FLAG_ARCH & ARMPMU_EVT_47BIT) == ARMPMU_EVT_47BIT); }, \ } +/* + * Maximum branch records in BRBE + */ +#define BRBE_MAX_ENTRIES 64 + +struct brbe_records { + struct perf_branch_stack brbe_stack; + struct perf_branch_entry brbe_entries[BRBE_MAX_ENTRIES]; +}; + /* The events for a given PMU register set. */ struct pmu_hw_events { /* @@ -70,6 +80,23 @@ struct pmu_hw_events { struct arm_pmu *percpu_pmu; int irq; + + /* Detected BRBE attributes */ + bool brbe_v1p1; + int brbe_cc; + int brbe_nr; + int brbe_format; + + /* Evaluated BRBE configuration */ + u64 brbfcr; + u64 brbcr; + + /* Tracked BRBE context */ + unsigned int brbe_users; + void *brbe_context; + + /* Captured BRBE buffer - copied as is into perf_sample_data */ + struct brbe_records *branches; }; enum armpmu_attr_groups { From patchwork Mon Oct 17 05:57:10 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anshuman Khandual X-Patchwork-Id: 3213 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:4ac7:0:0:0:0:0 with SMTP id y7csp1289366wrs; Sun, 16 Oct 2022 22:59:42 -0700 (PDT) X-Google-Smtp-Source: AMsMyM7WZPnteQKd2tVvKacfjvYtZUeyShk4gBpsVssJp0q4BZ9MuyTWiph53Zlyx2n5siFC5oe1 X-Received: by 2002:a63:6a85:0:b0:43b:d845:f67d with SMTP id f127-20020a636a85000000b0043bd845f67dmr9043944pgc.349.1665986381728; Sun, 16 Oct 2022 22:59:41 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1665986381; cv=none; d=google.com; s=arc-20160816; b=R6Tjb+OxyPLsI98cVA6UzKLY2hfyymODQ6OKlvU3ac21Db+7cOkP7LkTiWryygPYr/ G53OybUtTv82Sfb16o7wHB0w4e++mBoV4MfR/zU/SwPziKMbSQT1EsKgjb1C9x01fi7m iYGaiQPYkX3gP4vChyqqaEeTFC/8+Fw1LTaOYzvPDsel8SIJdf3ebPRvyp/huafPR38b j11Sia4E3vebaZmi74NQZYkBiBckwIszUUytSriRuXTfZ1BU8WHPW/FXKVgsRukPihvG reWwQ0hgirXGNT2wjHov7QOHi7RGemjYoqVJHh3SjlZq2o2d12wyvDvZEptu7wokCxS9 Z97A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=jmKfVqBDx6Q1JsVfcf2bav3tkpdhoe5RzNBcN7OyJnU=; b=Y2DgV/mvZ8q0ACKb/Mos8vt1hvAkhBSEk2pMbtMKG0fOvGm+wx5j5MXCM95Q/ZdOY5 WQcCfb0CQeX7tmKsWeXsgKiiln22aSlg51bYG/LlWxT4Qg0eSw4ysUQBinddQZMrU9LA aGGc1iq/5YbMcWSpOXEiKd0dPH6mMo8foESfoFbK6HqB/fduvjMRlhODYajCZgkDdf2D 9uR+YpSB1ylKBi1vTAtY+Mbbpk1zfUfK9imIeMsLpEY5X2PoNm36FCdt70DXcdBaCDtD EO8leTGOOg+PvgvtdVqMX7O/1yWdjlGhahEqIqnrjsDZbfvw0ZQycTKJf/zIfC+YjxU8 N4+w== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) 
From: Anshuman Khandual
To: linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, peterz@infradead.org, acme@kernel.org,
    mark.rutland@arm.com, will@kernel.org, catalin.marinas@arm.com
Cc: Anshuman Khandual, Mark Brown, James Clark, Rob Herring, Marc Zyngier,
    Suzuki Poulose, Ingo Molnar
Subject: [PATCH V4 4/7] driver/perf/arm_pmu_platform: Add support for BRBE attributes detection
Date: Mon, 17 Oct 2022 11:27:10 +0530
Message-Id: <20221017055713.451092-5-anshuman.khandual@arm.com>
In-Reply-To: <20221017055713.451092-1-anshuman.khandual@arm.com>
References: <20221017055713.451092-1-anshuman.khandual@arm.com>

This adds arm_pmu infrastructure to probe a BRBE implementation's attributes
via callbacks exported by the driver later on. The actual BRBE feature
detection will be added by the driver itself. The CPU specific BRBE entry
count, cycle count and format support get detected during PMU init, and this
information gets saved in the per-cpu struct pmu_hw_events, which later helps
in operating BRBE during a perf event context.
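For reference, the probe that the BRBE driver (patch 6/7 in this series)
plugs into this hook has to run on each supported CPU because it reads
CPU-local ID registers, which is why the code below uses
smp_call_function_single(). Condensed from arm64_pmu_brbe_probe() later in
the series:

  	u64 aa64dfr0, brbidr;
  	unsigned int brbe;

  	aa64dfr0 = read_sysreg_s(SYS_ID_AA64DFR0_EL1);
  	brbe = cpuid_feature_extract_unsigned_field(aa64dfr0,
  						    ID_AA64DFR0_EL1_BRBE_SHIFT);
  	if (!brbe)
  		return;		/* BRBE not implemented on this CPU */

  	/* Number of records, cycle count bits and record format */
  	brbidr = read_sysreg_s(SYS_BRBIDR0_EL1);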
Cc: Will Deacon Cc: Mark Rutland Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Anshuman Khandual --- drivers/perf/arm_pmu_platform.c | 34 +++++++++++++++++++++++++++++++++ 1 file changed, 34 insertions(+) diff --git a/drivers/perf/arm_pmu_platform.c b/drivers/perf/arm_pmu_platform.c index 933b96e243b8..acdc445081aa 100644 --- a/drivers/perf/arm_pmu_platform.c +++ b/drivers/perf/arm_pmu_platform.c @@ -172,6 +172,36 @@ static int armpmu_request_irqs(struct arm_pmu *armpmu) return err; } +static void arm_brbe_probe_cpu(void *info) +{ + struct pmu_hw_events *hw_events; + struct arm_pmu *armpmu = info; + + /* + * Return from here, if BRBE driver has not been + * implemented for this PMU. This helps prevent + * kernel crash later when brbe_probe() will be + * called on the PMU. + */ + if (!armpmu->brbe_probe) + return; + + hw_events = per_cpu_ptr(armpmu->hw_events, smp_processor_id()); + armpmu->brbe_probe(hw_events); +} + +static int armpmu_request_brbe(struct arm_pmu *armpmu) +{ + int cpu, err = 0; + + for_each_cpu(cpu, &armpmu->supported_cpus) { + err = smp_call_function_single(cpu, arm_brbe_probe_cpu, armpmu, 1); + if (err) + return err; + } + return err; +} + static void armpmu_free_irqs(struct arm_pmu *armpmu) { int cpu; @@ -229,6 +259,10 @@ int arm_pmu_device_probe(struct platform_device *pdev, if (ret) goto out_free_irqs; + ret = armpmu_request_brbe(pmu); + if (ret) + goto out_free_irqs; + ret = armpmu_register(pmu); if (ret) { dev_err(dev, "failed to register PMU devices!\n"); From patchwork Mon Oct 17 05:57:11 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anshuman Khandual X-Patchwork-Id: 3215 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:4ac7:0:0:0:0:0 with SMTP id y7csp1293519wrs; Sun, 16 Oct 2022 23:12:30 -0700 (PDT) X-Google-Smtp-Source: AMsMyM6gQmBpy+2B68PW42PEP28RA0ynalhy2/27m1fuAY35RGQQeN1tqZN8CLGqeOVZ8XqSe7nY X-Received: by 2002:a63:5d64:0:b0:446:983:b21d with SMTP id o36-20020a635d64000000b004460983b21dmr9289303pgm.441.1665987150587; Sun, 16 Oct 2022 23:12:30 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1665987150; cv=none; d=google.com; s=arc-20160816; b=Gb9Zp0NXw9Pqgj3amNbH6mXRhGsKCF/37HhXQB0fQEaf3VsO4pOppaa7p1TnpT5+O9 w6LSott97XUNcraJTaaYwv2xNE1nJ9j74q2GCpuwPbXzLH5lbtNdxkcOCVygNG6sKp62 CwpbVS+/8niqti8/0RFum+vhUCvyDaLOWcIsu8auIKBxQWin2yy8g+SVw8NswuF2OLt6 FtrxJW6gzxmxuhjikBbHr/WfCEE62+HqGtcMoIEF2bh3zQYnyjlefPbXSyuNXGAfA1gc Bq0ujPo4bZ8P/NBnCtSv+PiLxfx8++KT8I1qgVu1J7LimUOj5S4yUWMkgBUbPtoWKt1v wfRQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=cYamx6lYnElqtf/c8BDtVQTl+FmYIGUiLwujVWjJRkQ=; b=FS6mH5XwPVMZQcg9jT8SPFzUABpgJotVGhBQSg2g4kDFNVxx1U43Ikx5hwOrRUMg/o AlYqeIKNCfYh2IsXKhCAOXpD/AFRGLlQHJd1gLb8dHZi4OZLkix+xLRA1MGuRCSH1ncP es4sHH4eBTlXxw6Csw4oTZqdaHQx4oJQ48dUBM9s5HKzixQGCTP8KKt7DnN/URd0jIk8 QdRTfxQmik+NBFAi849Hwqgs4ujjLYBow96ym4EN05zZUVKioDp6oaCjOVzioPCeQyql kf/oYmD6wi0ThSHS/W8tsI01+5hc1n5vWuc5dxZ7lOz1jRNB4tYEWCCwiPdetAcvCRVn 5BMA== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=arm.com Received: from out1.vger.email (out1.vger.email. 
From: Anshuman Khandual
To: linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, peterz@infradead.org, acme@kernel.org,
    mark.rutland@arm.com, will@kernel.org, catalin.marinas@arm.com
Cc: Anshuman Khandual, Mark Brown, James Clark, Rob Herring, Marc Zyngier,
    Suzuki Poulose, Ingo Molnar
Subject: [PATCH V4 5/7] arm64/perf: Drive BRBE from perf event states
Date: Mon, 17 Oct 2022 11:27:11 +0530
Message-Id: <20221017055713.451092-6-anshuman.khandual@arm.com>
In-Reply-To: <20221017055713.451092-1-anshuman.khandual@arm.com>
References: <20221017055713.451092-1-anshuman.khandual@arm.com>

Branch stack sampling rides along the normal perf event, and all the branch
records get captured during the PMU interrupt. This changes perf event
handling on the arm64 platform to accommodate the BRBE operations required to
enable branch stack sampling support.
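Nothing BRBE specific is needed on the user side; this path is driven by the
generic perf branch-stack ABI. A minimal sketch of an event configuration
that would exercise it (standard perf_event_attr usage, not part of the
patch), roughly equivalent to 'perf record -j any,u' from the perf tool:

  	struct perf_event_attr attr = {
  		.type			= PERF_TYPE_HARDWARE,
  		.config			= PERF_COUNT_HW_CPU_CYCLES,
  		.sample_period		= 100000,
  		.sample_type		= PERF_SAMPLE_IP | PERF_SAMPLE_BRANCH_STACK,
  		.branch_sample_type	= PERF_SAMPLE_BRANCH_ANY |
  					  PERF_SAMPLE_BRANCH_USER,
  	};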
Cc: Peter Zijlstra Cc: Ingo Molnar Cc: Arnaldo Carvalho de Melo Cc: Mark Rutland Cc: Will Deacon Cc: Catalin Marinas Cc: linux-perf-users@vger.kernel.org Cc: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org Signed-off-by: Anshuman Khandual --- arch/arm64/kernel/perf_event.c | 7 ++++++ drivers/perf/arm_pmu.c | 40 ++++++++++++++++++++++++++++++++++ 2 files changed, 47 insertions(+) diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c index c97377e28288..97db333d1208 100644 --- a/arch/arm64/kernel/perf_event.c +++ b/arch/arm64/kernel/perf_event.c @@ -874,6 +874,13 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu) if (!armpmu_event_set_period(event)) continue; + if (has_branch_stack(event)) { + cpu_pmu->brbe_read(cpuc, event); + data.br_stack = &cpuc->branches->brbe_stack; + data.sample_flags |= PERF_SAMPLE_BRANCH_STACK; + cpu_pmu->brbe_reset(cpuc); + } + /* * Perf event overflow will queue the processing of the event as * an irq_work which will be taken care of in the handling of diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c index 5048a500441e..1a8dca4e513e 100644 --- a/drivers/perf/arm_pmu.c +++ b/drivers/perf/arm_pmu.c @@ -271,12 +271,22 @@ armpmu_stop(struct perf_event *event, int flags) { struct arm_pmu *armpmu = to_arm_pmu(event->pmu); struct hw_perf_event *hwc = &event->hw; + struct pmu_hw_events *hw_events = this_cpu_ptr(armpmu->hw_events); /* * ARM pmu always has to update the counter, so ignore * PERF_EF_UPDATE, see comments in armpmu_start(). */ if (!(hwc->state & PERF_HES_STOPPED)) { + if (has_branch_stack(event)) { + WARN_ON_ONCE(!hw_events->brbe_users); + hw_events->brbe_users--; + if (!hw_events->brbe_users) { + hw_events->brbe_context = NULL; + armpmu->brbe_disable(hw_events); + } + } + armpmu->disable(event); armpmu_event_update(event); hwc->state |= PERF_HES_STOPPED | PERF_HES_UPTODATE; @@ -287,6 +297,7 @@ static void armpmu_start(struct perf_event *event, int flags) { struct arm_pmu *armpmu = to_arm_pmu(event->pmu); struct hw_perf_event *hwc = &event->hw; + struct pmu_hw_events *hw_events = this_cpu_ptr(armpmu->hw_events); /* * ARM pmu always has to reprogram the period, so ignore @@ -304,6 +315,14 @@ static void armpmu_start(struct perf_event *event, int flags) * happened since disabling. 
*/ armpmu_event_set_period(event); + if (has_branch_stack(event)) { + if (event->ctx->task && hw_events->brbe_context != event->ctx) { + armpmu->brbe_reset(hw_events); + hw_events->brbe_context = event->ctx; + } + armpmu->brbe_enable(hw_events); + hw_events->brbe_users++; + } armpmu->enable(event); } @@ -349,6 +368,10 @@ armpmu_add(struct perf_event *event, int flags) hw_events->events[idx] = event; hwc->state = PERF_HES_STOPPED | PERF_HES_UPTODATE; + + if (has_branch_stack(event)) + armpmu->brbe_filter(hw_events, event); + if (flags & PERF_EF_START) armpmu_start(event, PERF_EF_RELOAD); @@ -443,6 +466,7 @@ __hw_perf_event_init(struct perf_event *event) { struct arm_pmu *armpmu = to_arm_pmu(event->pmu); struct hw_perf_event *hwc = &event->hw; + struct pmu_hw_events *hw_events = this_cpu_ptr(armpmu->hw_events); int mapping; hwc->flags = 0; @@ -492,6 +516,9 @@ __hw_perf_event_init(struct perf_event *event) local64_set(&hwc->period_left, hwc->sample_period); } + if (has_branch_stack(event)) + armpmu->brbe_filter(hw_events, event); + return validate_group(event); } @@ -520,6 +547,18 @@ static int armpmu_event_init(struct perf_event *event) return __hw_perf_event_init(event); } +static void armpmu_sched_task(struct perf_event_context *ctx, bool sched_in) +{ + struct arm_pmu *armpmu = to_arm_pmu(ctx->pmu); + struct pmu_hw_events *hw_events = this_cpu_ptr(armpmu->hw_events); + + if (!hw_events->brbe_users) + return; + + if (sched_in) + armpmu->brbe_reset(hw_events); +} + static void armpmu_enable(struct pmu *pmu) { struct arm_pmu *armpmu = to_arm_pmu(pmu); @@ -877,6 +916,7 @@ static struct arm_pmu *__armpmu_alloc(gfp_t flags) } pmu->pmu = (struct pmu) { + .sched_task = armpmu_sched_task, .pmu_enable = armpmu_enable, .pmu_disable = armpmu_disable, .event_init = armpmu_event_init, From patchwork Mon Oct 17 05:57:12 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anshuman Khandual X-Patchwork-Id: 3214 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:4ac7:0:0:0:0:0 with SMTP id y7csp1291686wrs; Sun, 16 Oct 2022 23:06:15 -0700 (PDT) X-Google-Smtp-Source: AMsMyM4GXV0mF8pL30oc0hWBCJRgOLGsOnkxbI4r4KNa21MaGZEXmOSSWD/rczaFZKch6HvO2YHg X-Received: by 2002:a17:907:7286:b0:78d:2848:bc88 with SMTP id dt6-20020a170907728600b0078d2848bc88mr7596502ejc.67.1665986775250; Sun, 16 Oct 2022 23:06:15 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1665986775; cv=none; d=google.com; s=arc-20160816; b=oaMNVj7x8aJe4mQ2c53xdGgnG3rASmiQT74HNdcSYVbG+DkwFC7VMieXiCjdzg332+ BrsD57Zlyod9pC/vENgDABMSDH160Vk832N9EiUx+GFCscDZ2U1g487so7G9DWPPNGOZ rLaOx1qWJLxT6zr1LpbJ/WRiUDMGmsB7PdirCdAHpvBvHL1kYTXX1FDvH/vn8T/+r8+c 3Qj6mp6Fj3dDFzl3sYmTY+274zHaqloFbUtEL0qaqWLm91fMnckLKCR2wGTOueXtc8rB pJYGatjFHzuQ5R05f9CSOhUlt6scaBWRemvvfSySRMINw6tYHXPZBIEt+m7Dgubf6w3y Juog== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from; bh=e3emlZoU5xTQwLTwADcpQbGIKKreECXj/YLGzE363uU=; b=Whq3E5qOo36sny/paEZo5m7JRNEXMypiOLLHYoW4xwQx0/alcOKMikeiY4k45FGXmT OTQR1RMeZ0czD/kUWUEz1gKVdVKkQRmOLuno+JPfy4Fqe1PwrXe/u3k0ti5V0ChUbghq Vsn0PnkMb6k8Vj7uY+46Ti3FMblkc1wvkfZYmg8Ps35g3UW4oDL/tYRpRJAusHWw7iXV LBbh0aDNp8tXsm5mVsQ874awyhRfjENB6N6WWrwmnzd6ovaK9hDac7H4Ch+opbvyjfXT Zff99vxD2NlvSw0VHiNI+8WTzX/tdFUTfduMeSAriNOgpzyuiEElGxxkWeSOD49/MNjr IvlQ== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of 
From: Anshuman Khandual
To: linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, peterz@infradead.org, acme@kernel.org,
    mark.rutland@arm.com, will@kernel.org, catalin.marinas@arm.com
Cc: Anshuman Khandual, Mark Brown, James Clark, Rob Herring, Marc Zyngier,
    Suzuki Poulose, Ingo Molnar
Subject: [PATCH V4 6/7] arm64/perf: Add BRBE driver
Date: Mon, 17 Oct 2022 11:27:12 +0530
Message-Id: <20221017055713.451092-7-anshuman.khandual@arm.com>
In-Reply-To: <20221017055713.451092-1-anshuman.khandual@arm.com>
References: <20221017055713.451092-1-anshuman.khandual@arm.com>

This adds a BRBE driver which implements all the required helper functions
for struct arm_pmu. The following functions are defined by this driver, which
configure, enable, capture, reset and disable the BRBE buffer HW as and when
requested via the perf branch stack sampling framework.
- arm64_pmu_brbe_filter() - arm64_pmu_brbe_enable() - arm64_pmu_brbe_disable() - arm64_pmu_brbe_read() - arm64_pmu_brbe_probe() - arm64_pmu_brbe_reset() - arm64_pmu_brbe_supported() Cc: Peter Zijlstra Cc: Ingo Molnar Cc: Arnaldo Carvalho de Melo Cc: Mark Rutland Cc: Will Deacon Cc: Catalin Marinas Cc: linux-arm-kernel@lists.infradead.org Cc: linux-perf-users@vger.kernel.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Anshuman Khandual --- arch/arm64/kernel/perf_event.c | 8 +- drivers/perf/Kconfig | 11 + drivers/perf/Makefile | 1 + drivers/perf/arm_pmu_brbe.c | 441 +++++++++++++++++++++++++++++++++ drivers/perf/arm_pmu_brbe.h | 259 +++++++++++++++++++ include/linux/perf/arm_pmu.h | 20 ++ 6 files changed, 739 insertions(+), 1 deletion(-) create mode 100644 drivers/perf/arm_pmu_brbe.c create mode 100644 drivers/perf/arm_pmu_brbe.h diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c index 97db333d1208..85a3aaefc0fb 100644 --- a/arch/arm64/kernel/perf_event.c +++ b/arch/arm64/kernel/perf_event.c @@ -1034,31 +1034,37 @@ static int armv8pmu_filter_match(struct perf_event *event) static void armv8pmu_brbe_filter(struct pmu_hw_events *hw_event, struct perf_event *event) { + arm64_pmu_brbe_filter(hw_event, event); } static void armv8pmu_brbe_enable(struct pmu_hw_events *hw_event) { + arm64_pmu_brbe_enable(hw_event); } static void armv8pmu_brbe_disable(struct pmu_hw_events *hw_event) { + arm64_pmu_brbe_disable(hw_event); } static void armv8pmu_brbe_read(struct pmu_hw_events *hw_event, struct perf_event *event) { + arm64_pmu_brbe_read(hw_event, event); } static void armv8pmu_brbe_probe(struct pmu_hw_events *hw_event) { + arm64_pmu_brbe_probe(hw_event); } static void armv8pmu_brbe_reset(struct pmu_hw_events *hw_event) { + arm64_pmu_brbe_reset(hw_event); } static bool armv8pmu_brbe_supported(struct perf_event *event) { - return false; + return arm64_pmu_brbe_supported(event); } static void armv8pmu_reset(void *info) diff --git a/drivers/perf/Kconfig b/drivers/perf/Kconfig index 341010f20b77..4efd0a77c5ff 100644 --- a/drivers/perf/Kconfig +++ b/drivers/perf/Kconfig @@ -190,6 +190,17 @@ config ALIBABA_UNCORE_DRW_PMU Support for Driveway PMU events monitoring on Yitian 710 DDR Sub-system. +config ARM_BRBE_PMU + tristate "Enable support for Branch Record Buffer Extension (BRBE)" + depends on ARM64 && ARM_PMU + default y + help + Enable perf support for Branch Record Buffer Extension (BRBE) which + records all branches taken in an execution path. This supports some + branch types and privilege based filtering. It captured additional + relevant information such as cycle count, misprediction and branch + type, branch privilege level etc. 
+ source "drivers/perf/hisilicon/Kconfig" config MARVELL_CN10K_DDR_PMU diff --git a/drivers/perf/Makefile b/drivers/perf/Makefile index 050d04ee19dd..00428793e66c 100644 --- a/drivers/perf/Makefile +++ b/drivers/perf/Makefile @@ -16,6 +16,7 @@ obj-$(CONFIG_RISCV_PMU_SBI) += riscv_pmu_sbi.o obj-$(CONFIG_THUNDERX2_PMU) += thunderx2_pmu.o obj-$(CONFIG_XGENE_PMU) += xgene_pmu.o obj-$(CONFIG_ARM_SPE_PMU) += arm_spe_pmu.o +obj-$(CONFIG_ARM_BRBE_PMU) += arm_pmu_brbe.o obj-$(CONFIG_ARM_DMC620_PMU) += arm_dmc620_pmu.o obj-$(CONFIG_MARVELL_CN10K_TAD_PMU) += marvell_cn10k_tad_pmu.o obj-$(CONFIG_MARVELL_CN10K_DDR_PMU) += marvell_cn10k_ddr_pmu.o diff --git a/drivers/perf/arm_pmu_brbe.c b/drivers/perf/arm_pmu_brbe.c new file mode 100644 index 000000000000..8a1d613c80c6 --- /dev/null +++ b/drivers/perf/arm_pmu_brbe.c @@ -0,0 +1,441 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Branch Record Buffer Extension Driver. + * + * Copyright (C) 2021 ARM Limited + * + * Author: Anshuman Khandual + */ +#include "arm_pmu_brbe.h" + +#define BRBFCR_BRANCH_ALL (BRBFCR_EL1_DIRECT | BRBFCR_EL1_INDIRECT | \ + BRBFCR_EL1_RTN | BRBFCR_EL1_INDCALL | \ + BRBFCR_EL1_DIRCALL | BRBFCR_EL1_CONDDIR) + +#define BRBE_FCR_MASK (BRBFCR_BRANCH_ALL) +#define BRBE_CR_MASK (BRBCR_EL1_EXCEPTION | BRBCR_EL1_ERTN | BRBCR_EL1_CC | \ + BRBCR_EL1_MPRED | BRBCR_EL1_E1BRE | BRBCR_EL1_E0BRE) + +static void set_brbe_disabled(struct pmu_hw_events *cpuc) +{ + cpuc->brbe_nr = 0; +} + +static bool brbe_disabled(struct pmu_hw_events *cpuc) +{ + return !cpuc->brbe_nr; +} + +bool arm64_pmu_brbe_supported(struct perf_event *event) +{ + struct arm_pmu *armpmu = to_arm_pmu(event->pmu); + struct pmu_hw_events *hw_events = per_cpu_ptr(armpmu->hw_events, event->cpu); + + /* + * If the event does not have at least one of the privilege + * branch filters as in PERF_SAMPLE_BRANCH_PLM_ALL, the core + * perf will adjust its value based on perf event's existing + * privilege level via attr.exclude_[user|kernel|hv]. + * + * As event->attr.branch_sample_type might have been changed + * when the event reaches here, it is not possible to figure + * out whether the event originally had HV privilege request + * or got added via the core perf. Just report this situation + * once and continue ignoring if there are other instances. 
+ */ + if (event->attr.branch_sample_type & PERF_SAMPLE_BRANCH_HV) + pr_warn_once("does not support hypervisor privilege branch filter\n"); + + if (event->attr.branch_sample_type & PERF_SAMPLE_BRANCH_ABORT_TX) { + pr_warn_once("does not support aborted transaction branch filter\n"); + return false; + } + + if (event->attr.branch_sample_type & PERF_SAMPLE_BRANCH_NO_TX) { + pr_warn_once("does not support non transaction branch filter\n"); + return false; + } + + if (event->attr.branch_sample_type & PERF_SAMPLE_BRANCH_IN_TX) { + pr_warn_once("does not support in transaction branch filter\n"); + return false; + } + return !brbe_disabled(hw_events); +} + +void arm64_pmu_brbe_probe(struct pmu_hw_events *cpuc) +{ + u64 aa64dfr0, brbidr; + unsigned int brbe; + + aa64dfr0 = read_sysreg_s(SYS_ID_AA64DFR0_EL1); + brbe = cpuid_feature_extract_unsigned_field(aa64dfr0, ID_AA64DFR0_EL1_BRBE_SHIFT); + if (!brbe) { + set_brbe_disabled(cpuc); + return; + } else if (brbe == ID_AA64DFR0_EL1_BRBE_IMP) { + cpuc->brbe_v1p1 = false; + } else if (brbe == ID_AA64DFR0_EL1_BRBE_BRBE_V1P1) { + cpuc->brbe_v1p1 = true; + } + + brbidr = read_sysreg_s(SYS_BRBIDR0_EL1); + cpuc->brbe_format = brbe_fetch_format(brbidr); + if (cpuc->brbe_format != BRBIDR0_EL1_FORMAT_0) { + set_brbe_disabled(cpuc); + return; + } + + cpuc->brbe_cc = brbe_fetch_cc_bits(brbidr); + if (cpuc->brbe_cc != BRBIDR0_EL1_CC_20_BIT) { + set_brbe_disabled(cpuc); + return; + } + + cpuc->brbe_nr = brbe_fetch_numrec(brbidr); + if (!valid_brbe_nr(cpuc->brbe_nr)) { + set_brbe_disabled(cpuc); + return; + } +} + +void arm64_pmu_brbe_enable(struct pmu_hw_events *cpuc) +{ + u64 brbfcr, brbcr; + + if (brbe_disabled(cpuc)) + return; + + brbfcr = read_sysreg_s(SYS_BRBFCR_EL1); + brbfcr &= ~BRBFCR_EL1_BANK_MASK; + brbfcr &= ~(BRBFCR_EL1_EnL | BRBFCR_EL1_PAUSED | BRBE_FCR_MASK); + brbfcr |= (cpuc->brbfcr & BRBE_FCR_MASK); + write_sysreg_s(brbfcr, SYS_BRBFCR_EL1); + isb(); + + brbcr = read_sysreg_s(SYS_BRBCR_EL1); + brbcr &= ~BRBE_CR_MASK; + brbcr |= BRBCR_EL1_FZP; + brbcr |= (BRBCR_EL1_TS_PHYSICAL << BRBCR_EL1_TS_SHIFT); + brbcr |= (cpuc->brbcr & BRBE_CR_MASK); + write_sysreg_s(brbcr, SYS_BRBCR_EL1); + isb(); +} + +void arm64_pmu_brbe_disable(struct pmu_hw_events *cpuc) +{ + u64 brbcr; + + if (brbe_disabled(cpuc)) + return; + + brbcr = read_sysreg_s(SYS_BRBCR_EL1); + brbcr &= ~(BRBCR_EL1_E0BRE | BRBCR_EL1_E1BRE); + write_sysreg_s(brbcr, SYS_BRBCR_EL1); + isb(); +} + +static void perf_branch_to_brbfcr(struct pmu_hw_events *cpuc, int branch_type) +{ + cpuc->brbfcr = 0; + + if (branch_type & PERF_SAMPLE_BRANCH_ANY) { + cpuc->brbfcr |= BRBFCR_BRANCH_ALL; + return; + } + + if (branch_type & PERF_SAMPLE_BRANCH_ANY_CALL) + cpuc->brbfcr |= (BRBFCR_EL1_INDCALL | BRBFCR_EL1_DIRCALL); + + if (branch_type & PERF_SAMPLE_BRANCH_ANY_RETURN) + cpuc->brbfcr |= BRBFCR_EL1_RTN; + + if (branch_type & PERF_SAMPLE_BRANCH_IND_CALL) + cpuc->brbfcr |= BRBFCR_EL1_INDCALL; + + if (branch_type & PERF_SAMPLE_BRANCH_COND) + cpuc->brbfcr |= BRBFCR_EL1_CONDDIR; + + if (branch_type & PERF_SAMPLE_BRANCH_IND_JUMP) + cpuc->brbfcr |= BRBFCR_EL1_INDIRECT; + + if (branch_type & PERF_SAMPLE_BRANCH_CALL) + cpuc->brbfcr |= BRBFCR_EL1_DIRCALL; +} + +static void perf_branch_to_brbcr(struct pmu_hw_events *cpuc, int branch_type) +{ + cpuc->brbcr = (BRBCR_EL1_CC | BRBCR_EL1_MPRED); + + if (branch_type & PERF_SAMPLE_BRANCH_USER) + cpuc->brbcr |= BRBCR_EL1_E0BRE; + + if (branch_type & PERF_SAMPLE_BRANCH_NO_CYCLES) + cpuc->brbcr &= ~BRBCR_EL1_CC; + + if (branch_type & PERF_SAMPLE_BRANCH_NO_FLAGS) + cpuc->brbcr &= 
~BRBCR_EL1_MPRED; + + if (branch_type & PERF_SAMPLE_BRANCH_KERNEL) + cpuc->brbcr |= BRBCR_EL1_E1BRE; + else + return; + + /* + * The exception and exception return branches could be + * captured only when the event has necessary privilege + * indicated via branch type PERF_SAMPLE_BRANCH_KERNEL, + * which has been ascertained in generic perf. Please + * refer perf_copy_attr() for more details. + */ + if (branch_type & PERF_SAMPLE_BRANCH_ANY) { + cpuc->brbcr |= BRBCR_EL1_EXCEPTION; + cpuc->brbcr |= BRBCR_EL1_ERTN; + return; + } + + if (branch_type & PERF_SAMPLE_BRANCH_ANY_CALL) + cpuc->brbcr |= BRBCR_EL1_EXCEPTION; + + if (branch_type & PERF_SAMPLE_BRANCH_ANY_RETURN) + cpuc->brbcr |= BRBCR_EL1_ERTN; +} + + +void arm64_pmu_brbe_filter(struct pmu_hw_events *cpuc, struct perf_event *event) +{ + u64 branch_type = event->attr.branch_sample_type; + + if (brbe_disabled(cpuc)) + return; + + perf_branch_to_brbfcr(cpuc, branch_type); + perf_branch_to_brbcr(cpuc, branch_type); +} + +static int brbe_fetch_perf_type(u64 brbinf, bool *new_branch_type) +{ + int brbe_type = brbe_fetch_type(brbinf); + *new_branch_type = false; + + switch (brbe_type) { + case BRBINF_EL1_TYPE_UNCOND_DIR: + return PERF_BR_UNCOND; + case BRBINF_EL1_TYPE_INDIR: + return PERF_BR_IND; + case BRBINF_EL1_TYPE_DIR_LINK: + return PERF_BR_CALL; + case BRBINF_EL1_TYPE_INDIR_LINK: + return PERF_BR_IND_CALL; + case BRBINF_EL1_TYPE_RET_SUB: + return PERF_BR_RET; + case BRBINF_EL1_TYPE_COND_DIR: + return PERF_BR_COND; + case BRBINF_EL1_TYPE_CALL: + return PERF_BR_CALL; + case BRBINF_EL1_TYPE_TRAP: + return PERF_BR_SYSCALL; + case BRBINF_EL1_TYPE_RET_EXCPT: + return PERF_BR_ERET; + case BRBINF_EL1_TYPE_IRQ: + return PERF_BR_IRQ; + case BRBINF_EL1_TYPE_DEBUG_HALT: + *new_branch_type = true; + return PERF_BR_ARM64_DEBUG_HALT; + case BRBINF_EL1_TYPE_SERROR: + return PERF_BR_SERROR; + case BRBINF_EL1_TYPE_INST_DEBUG: + *new_branch_type = true; + return PERF_BR_ARM64_DEBUG_INST; + case BRBINF_EL1_TYPE_DATA_DEBUG: + *new_branch_type = true; + return PERF_BR_ARM64_DEBUG_DATA; + case BRBINF_EL1_TYPE_ALGN_FAULT: + *new_branch_type = true; + return PERF_BR_NEW_FAULT_ALGN; + case BRBINF_EL1_TYPE_INST_FAULT: + *new_branch_type = true; + return PERF_BR_NEW_FAULT_INST; + case BRBINF_EL1_TYPE_DATA_FAULT: + *new_branch_type = true; + return PERF_BR_NEW_FAULT_DATA; + case BRBINF_EL1_TYPE_FIQ: + *new_branch_type = true; + return PERF_BR_ARM64_FIQ; + case BRBINF_EL1_TYPE_DEBUG_EXIT: + *new_branch_type = true; + return PERF_BR_ARM64_DEBUG_EXIT; + default: + pr_warn("unknown branch type captured\n"); + return PERF_BR_UNKNOWN; + } +} + +static int brbe_fetch_perf_priv(u64 brbinf) +{ + int brbe_el = brbe_fetch_el(brbinf); + + switch (brbe_el) { + case BRBINF_EL1_EL_EL0: + return PERF_BR_PRIV_USER; + case BRBINF_EL1_EL_EL1: + return PERF_BR_PRIV_KERNEL; + case BRBINF_EL1_EL_EL2: + if (is_kernel_in_hyp_mode()) + return PERF_BR_PRIV_KERNEL; + return PERF_BR_PRIV_HV; + default: + pr_warn("unknown branch privilege captured\n"); + return PERF_BR_PRIV_UNKNOWN; + } +} + +static void capture_brbe_flags(struct pmu_hw_events *cpuc, struct perf_event *event, + u64 brbinf, int idx) +{ + int branch_type, type = brbe_record_valid(brbinf); + bool new_branch_type; + + if (!branch_sample_no_cycles(event)) + cpuc->branches->brbe_entries[idx].cycles = brbe_fetch_cycles(brbinf); + + if (branch_sample_type(event)) { + branch_type = brbe_fetch_perf_type(brbinf, &new_branch_type); + if (new_branch_type) { + cpuc->branches->brbe_entries[idx].type = PERF_BR_EXTEND_ABI; + 
cpuc->branches->brbe_entries[idx].new_type = branch_type; + } else { + cpuc->branches->brbe_entries[idx].type = branch_type; + } + } + + if (!branch_sample_no_flags(event)) { + /* + * BRBINF_LASTFAILED does not indicate that the last transaction + * failed or was aborted during the current branch record itself. + * Rather, this indicates that all the branch records which were + * in transaction until the current branch record have failed. So + * the entire BRBE buffer needs to be processed later on to find + * all branch records which might have failed. + */ + cpuc->branches->brbe_entries[idx].abort = brbinf & BRBINF_EL1_LASTFAILED; + + /* + * This information (i.e transaction state and mispredicts) + * is not available for target only branch records. + */ + if (type != BRBINF_EL1_VALID_TARGET) { + cpuc->branches->brbe_entries[idx].mispred = brbinf & BRBINF_EL1_MPRED; + cpuc->branches->brbe_entries[idx].predicted = !(brbinf & BRBINF_EL1_MPRED); + cpuc->branches->brbe_entries[idx].in_tx = brbinf & BRBINF_EL1_T; + } + } + + if (branch_sample_priv(event)) { + /* + * This information (i.e branch privilege level) is not + * available for source only branch records. + */ + if (type != BRBINF_EL1_VALID_SOURCE) + cpuc->branches->brbe_entries[idx].priv = brbe_fetch_perf_priv(brbinf); + } +} + +/* + * A branch record with BRBINF_EL1.LASTFAILED set implies that all + * preceding consecutive branch records, that were in a transaction + * (i.e their BRBINF_EL1.TX set) have been aborted. + * + * Similarly BRBFCR_EL1.LASTFAILED being set indicates that all preceding + * consecutive branch records up to the last record, which were in a + * transaction (i.e their BRBINF_EL1.TX set) have been aborted. + * + * --------------------------------- ------------------- + * | 00 | BRBSRC | BRBTGT | BRBINF | | TX = 1 | LF = 0 | [TX success] + * --------------------------------- ------------------- + * | 01 | BRBSRC | BRBTGT | BRBINF | | TX = 1 | LF = 0 | [TX success] + * --------------------------------- ------------------- + * | 02 | BRBSRC | BRBTGT | BRBINF | | TX = 0 | LF = 0 | + * --------------------------------- ------------------- + * | 03 | BRBSRC | BRBTGT | BRBINF | | TX = 1 | LF = 0 | [TX failed] + * --------------------------------- ------------------- + * | 04 | BRBSRC | BRBTGT | BRBINF | | TX = 1 | LF = 0 | [TX failed] + * --------------------------------- ------------------- + * | 05 | BRBSRC | BRBTGT | BRBINF | | TX = 0 | LF = 1 | + * --------------------------------- ------------------- + * | .. | BRBSRC | BRBTGT | BRBINF | | TX = 0 | LF = 0 | + * --------------------------------- ------------------- + * | 61 | BRBSRC | BRBTGT | BRBINF | | TX = 1 | LF = 0 | [TX failed] + * --------------------------------- ------------------- + * | 62 | BRBSRC | BRBTGT | BRBINF | | TX = 1 | LF = 0 | [TX failed] + * --------------------------------- ------------------- + * | 63 | BRBSRC | BRBTGT | BRBINF | | TX = 1 | LF = 0 | [TX failed] + * --------------------------------- ------------------- + * + * BRBFCR_EL1.LASTFAILED == 1 + * + * Here BRBFCR_EL1.LASTFAILED fails all those consecutive, in-transaction + * branches near the end of the BRBE buffer.
+ */ +static void process_branch_aborts(struct pmu_hw_events *cpuc) +{ + u64 brbfcr = read_sysreg_s(SYS_BRBFCR_EL1); + bool lastfailed = !!(brbfcr & BRBFCR_EL1_LASTFAILED); + int idx = cpuc->brbe_nr - 1; + + do { + if (cpuc->branches->brbe_entries[idx].in_tx) { + cpuc->branches->brbe_entries[idx].abort = lastfailed; + } else { + lastfailed = cpuc->branches->brbe_entries[idx].abort; + cpuc->branches->brbe_entries[idx].abort = false; + } + } while (idx--, idx >= 0); +} + +void arm64_pmu_brbe_read(struct pmu_hw_events *cpuc, struct perf_event *event) +{ + u64 brbinf; + int idx; + + if (brbe_disabled(cpuc)) + return; + + set_brbe_paused(); + for (idx = 0; idx < cpuc->brbe_nr; idx++) { + select_brbe_bank_index(idx); + brbinf = get_brbinf_reg(idx); + /* + * There are no valid entries left in the buffer. + * Abort the branch record processing to save some + * cycles and also reduce the capture/process load + * for user space. + */ + if (brbe_invalid(brbinf)) + break; + + if (brbe_valid(brbinf)) { + cpuc->branches->brbe_entries[idx].from = get_brbsrc_reg(idx); + cpuc->branches->brbe_entries[idx].to = get_brbtgt_reg(idx); + } else if (brbe_source(brbinf)) { + cpuc->branches->brbe_entries[idx].from = get_brbsrc_reg(idx); + cpuc->branches->brbe_entries[idx].to = 0; + } else if (brbe_target(brbinf)) { + cpuc->branches->brbe_entries[idx].from = 0; + cpuc->branches->brbe_entries[idx].to = get_brbtgt_reg(idx); + } + capture_brbe_flags(cpuc, event, brbinf, idx); + } + cpuc->branches->brbe_stack.nr = idx; + cpuc->branches->brbe_stack.hw_idx = -1ULL; + process_branch_aborts(cpuc); +} + +void arm64_pmu_brbe_reset(struct pmu_hw_events *cpuc) +{ + if (brbe_disabled(cpuc)) + return; + + asm volatile(BRB_IALL); + isb(); +} diff --git a/drivers/perf/arm_pmu_brbe.h b/drivers/perf/arm_pmu_brbe.h new file mode 100644 index 000000000000..22c4b25b1777 --- /dev/null +++ b/drivers/perf/arm_pmu_brbe.h @@ -0,0 +1,259 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Branch Record Buffer Extension Helpers. + * + * Copyright (C) 2021 ARM Limited + * + * Author: Anshuman Khandual + */ +#define pr_fmt(fmt) "brbe: " fmt + +#include + +/* + * BRBE Instructions + * + * BRB_IALL : Invalidate the entire buffer + * BRB_INJ : Inject latest branch record derived from [BRBSRCINJ, BRBTGTINJ, BRBINFINJ] + */ +#define BRB_IALL __emit_inst(0xD5000000 | sys_insn(1, 1, 7, 2, 4) | (0x1f)) +#define BRB_INJ __emit_inst(0xD5000000 | sys_insn(1, 1, 7, 2, 5) | (0x1f)) + +/* + * BRBE Buffer Organization + * + * The BRBE buffer is arranged as multiple banks of 32 branch record + * entries each. An individual branch record in a given bank can be + * accessed, after selecting the bank in BRBFCR_EL1.BANK, via the + * register set i.e [BRBSRC, BRBTGT, BRBINF] with indices [0..31]. + * + * Bank 0 + * + * --------------------------------- ------ + * | 00 | BRBSRC | BRBTGT | BRBINF | | 00 | + * --------------------------------- ------ + * | 01 | BRBSRC | BRBTGT | BRBINF | | 01 | + * --------------------------------- ------ + * | .. | BRBSRC | BRBTGT | BRBINF | | .. | + * --------------------------------- ------ + * | 31 | BRBSRC | BRBTGT | BRBINF | | 31 | + * --------------------------------- ------ + * + * Bank 1 + * + * --------------------------------- ------ + * | 32 | BRBSRC | BRBTGT | BRBINF | | 00 | + * --------------------------------- ------ + * | 33 | BRBSRC | BRBTGT | BRBINF | | 01 | + * --------------------------------- ------ + * | .. | BRBSRC | BRBTGT | BRBINF | | ..
| + * --------------------------------- ------ + * | 63 | BRBSRC | BRBTGT | BRBINF | | 31 | + * --------------------------------- ------ + */ +#define BRBE_BANK0_IDX_MIN 0 +#define BRBE_BANK0_IDX_MAX 31 +#define BRBE_BANK1_IDX_MIN 32 +#define BRBE_BANK1_IDX_MAX 63 + +#define RETURN_READ_BRBSRCN(n) \ + read_sysreg_s(SYS_BRBSRC##n##_EL1) + +#define RETURN_READ_BRBTGTN(n) \ + read_sysreg_s(SYS_BRBTGT##n##_EL1) + +#define RETURN_READ_BRBINFN(n) \ + read_sysreg_s(SYS_BRBINF##n##_EL1) + +#define BRBE_REGN_CASE(n, case_macro) \ + case n: return case_macro(n); break + +#define BRBE_REGN_SWITCH(x, case_macro) \ + do { \ + switch (x) { \ + BRBE_REGN_CASE(0, case_macro); \ + BRBE_REGN_CASE(1, case_macro); \ + BRBE_REGN_CASE(2, case_macro); \ + BRBE_REGN_CASE(3, case_macro); \ + BRBE_REGN_CASE(4, case_macro); \ + BRBE_REGN_CASE(5, case_macro); \ + BRBE_REGN_CASE(6, case_macro); \ + BRBE_REGN_CASE(7, case_macro); \ + BRBE_REGN_CASE(8, case_macro); \ + BRBE_REGN_CASE(9, case_macro); \ + BRBE_REGN_CASE(10, case_macro); \ + BRBE_REGN_CASE(11, case_macro); \ + BRBE_REGN_CASE(12, case_macro); \ + BRBE_REGN_CASE(13, case_macro); \ + BRBE_REGN_CASE(14, case_macro); \ + BRBE_REGN_CASE(15, case_macro); \ + BRBE_REGN_CASE(16, case_macro); \ + BRBE_REGN_CASE(17, case_macro); \ + BRBE_REGN_CASE(18, case_macro); \ + BRBE_REGN_CASE(19, case_macro); \ + BRBE_REGN_CASE(20, case_macro); \ + BRBE_REGN_CASE(21, case_macro); \ + BRBE_REGN_CASE(22, case_macro); \ + BRBE_REGN_CASE(23, case_macro); \ + BRBE_REGN_CASE(24, case_macro); \ + BRBE_REGN_CASE(25, case_macro); \ + BRBE_REGN_CASE(26, case_macro); \ + BRBE_REGN_CASE(27, case_macro); \ + BRBE_REGN_CASE(28, case_macro); \ + BRBE_REGN_CASE(29, case_macro); \ + BRBE_REGN_CASE(30, case_macro); \ + BRBE_REGN_CASE(31, case_macro); \ + default: \ + pr_warn("unknown register index\n"); \ + return -1; \ + } \ + } while (0) + +static inline int buffer_to_brbe_idx(int buffer_idx) +{ + return buffer_idx % 32; +} + +static inline u64 get_brbsrc_reg(int buffer_idx) +{ + int brbe_idx = buffer_to_brbe_idx(buffer_idx); + + BRBE_REGN_SWITCH(brbe_idx, RETURN_READ_BRBSRCN); +} + +static inline u64 get_brbtgt_reg(int buffer_idx) +{ + int brbe_idx = buffer_to_brbe_idx(buffer_idx); + + BRBE_REGN_SWITCH(brbe_idx, RETURN_READ_BRBTGTN); +} + +static inline u64 get_brbinf_reg(int buffer_idx) +{ + int brbe_idx = buffer_to_brbe_idx(buffer_idx); + + BRBE_REGN_SWITCH(brbe_idx, RETURN_READ_BRBINFN); +} + +static inline u64 brbe_record_valid(u64 brbinf) +{ + return (brbinf & BRBINF_EL1_VALID_MASK) >> BRBINF_EL1_VALID_SHIFT; +} + +static inline bool brbe_invalid(u64 brbinf) +{ + return brbe_record_valid(brbinf) == BRBINF_EL1_VALID_NONE; +} + +static inline bool brbe_valid(u64 brbinf) +{ + return brbe_record_valid(brbinf) == BRBINF_EL1_VALID_FULL; +} + +static inline bool brbe_source(u64 brbinf) +{ + return brbe_record_valid(brbinf) == BRBINF_EL1_VALID_SOURCE; +} + +static inline bool brbe_target(u64 brbinf) +{ + return brbe_record_valid(brbinf) == BRBINF_EL1_VALID_TARGET; +} + +static inline int brbe_fetch_cycles(u64 brbinf) +{ + /* + * Captured cycle count is unknown and hence + * should not be passed on the user space. 
+ */ + if (brbinf & BRBINF_EL1_CCU) + return 0; + + return (brbinf & BRBINF_EL1_CC_MASK) >> BRBINF_EL1_CC_SHIFT; +} + +static inline int brbe_fetch_type(u64 brbinf) +{ + return (brbinf & BRBINF_EL1_TYPE_MASK) >> BRBINF_EL1_TYPE_SHIFT; +} + +static inline int brbe_fetch_el(u64 brbinf) +{ + return (brbinf & BRBINF_EL1_EL_MASK) >> BRBINF_EL1_EL_SHIFT; +} + +static inline int brbe_fetch_numrec(u64 brbidr) +{ + return (brbidr & BRBIDR0_EL1_NUMREC_MASK) >> BRBIDR0_EL1_NUMREC_SHIFT; +} + +static inline int brbe_fetch_format(u64 brbidr) +{ + return (brbidr & BRBIDR0_EL1_FORMAT_MASK) >> BRBIDR0_EL1_FORMAT_SHIFT; +} + +static inline int brbe_fetch_cc_bits(u64 brbidr) +{ + return (brbidr & BRBIDR0_EL1_CC_MASK) >> BRBIDR0_EL1_CC_SHIFT; +} + +static inline void select_brbe_bank(int bank) +{ + static int brbe_current_bank = -1; + u64 brbfcr; + + if (brbe_current_bank == bank) + return; + + WARN_ON(bank > 1); + brbfcr = read_sysreg_s(SYS_BRBFCR_EL1); + brbfcr &= ~BRBFCR_EL1_BANK_MASK; + brbfcr |= ((bank << BRBFCR_EL1_BANK_SHIFT) & BRBFCR_EL1_BANK_MASK); + write_sysreg_s(brbfcr, SYS_BRBFCR_EL1); + isb(); + brbe_current_bank = bank; +} + +static inline void select_brbe_bank_index(int buffer_idx) +{ + switch (buffer_idx) { + case BRBE_BANK0_IDX_MIN ... BRBE_BANK0_IDX_MAX: + select_brbe_bank(0); + break; + case BRBE_BANK1_IDX_MIN ... BRBE_BANK1_IDX_MAX: + select_brbe_bank(1); + break; + default: + pr_warn("unsupported BRBE index\n"); + } +} + +static inline bool valid_brbe_nr(int brbe_nr) +{ + switch (brbe_nr) { + case BRBIDR0_EL1_NUMREC_8: + case BRBIDR0_EL1_NUMREC_16: + case BRBIDR0_EL1_NUMREC_32: + case BRBIDR0_EL1_NUMREC_64: + return true; + default: + pr_warn("unsupported BRBE entries\n"); + return false; + } +} + +static inline bool brbe_paused(void) +{ + u64 brbfcr = read_sysreg_s(SYS_BRBFCR_EL1); + + return brbfcr & BRBFCR_EL1_PAUSED; +} + +static inline void set_brbe_paused(void) +{ + u64 brbfcr = read_sysreg_s(SYS_BRBFCR_EL1); + + write_sysreg_s(brbfcr | BRBFCR_EL1_PAUSED, SYS_BRBFCR_EL1); + isb(); +} diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h index bda0d9984a98..9c23b2b58b3d 100644 --- a/include/linux/perf/arm_pmu.h +++ b/include/linux/perf/arm_pmu.h @@ -168,6 +168,26 @@ struct arm_pmu { unsigned long acpi_cpuid; }; +#ifdef CONFIG_ARM_BRBE_PMU +void arm64_pmu_brbe_filter(struct pmu_hw_events *hw_events, struct perf_event *event); +void arm64_pmu_brbe_read(struct pmu_hw_events *cpuc, struct perf_event *event); +void arm64_pmu_brbe_disable(struct pmu_hw_events *cpuc); +void arm64_pmu_brbe_enable(struct pmu_hw_events *cpuc); +void arm64_pmu_brbe_probe(struct pmu_hw_events *cpuc); +void arm64_pmu_brbe_reset(struct pmu_hw_events *cpuc); +bool arm64_pmu_brbe_supported(struct perf_event *event); +#else +static inline void arm64_pmu_brbe_filter(struct pmu_hw_events *hw_events, struct perf_event *event) +{ +} +static inline void arm64_pmu_brbe_read(struct pmu_hw_events *cpuc, struct perf_event *event) { } +static inline void arm64_pmu_brbe_disable(struct pmu_hw_events *cpuc) { } +static inline void arm64_pmu_brbe_enable(struct pmu_hw_events *cpuc) { } +static inline void arm64_pmu_brbe_probe(struct pmu_hw_events *cpuc) { } +static inline void arm64_pmu_brbe_reset(struct pmu_hw_events *cpuc) { } +static inline bool arm64_pmu_brbe_supported(struct perf_event *event) { return false; } +#endif + #define to_arm_pmu(p) (container_of(p, struct arm_pmu, pmu)) u64 armpmu_event_update(struct perf_event *event);
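A quick illustration of the bank/index arithmetic used by buffer_to_brbe_idx() and select_brbe_bank_index() above: the following minimal, standalone userspace sketch (not part of the patch; the brbe_bank()/brbe_bank_idx() helper names are made up for illustration) maps a linear buffer index onto the bank number that would be programmed into BRBFCR_EL1.BANK and the register index within that bank.

#include <stdio.h>

/* Buffer indices 0..31 live in bank 0, indices 32..63 in bank 1. */
static int brbe_bank(int buffer_idx)
{
	return buffer_idx / 32;
}

/* Register index within the selected bank, as in buffer_to_brbe_idx(). */
static int brbe_bank_idx(int buffer_idx)
{
	return buffer_idx % 32;
}

int main(void)
{
	int samples[] = { 0, 17, 31, 32, 45, 63 };
	int i;

	for (i = 0; i < 6; i++)
		printf("buffer %2d -> bank %d, register %2d\n",
		       samples[i], brbe_bank(samples[i]), brbe_bank_idx(samples[i]));
	return 0;
}

With 64 records implemented (BRBIDR0_EL1.NUMREC == 64), indices 32..63 therefore require switching BRBFCR_EL1.BANK before the corresponding BRBSRC/BRBTGT/BRBINF registers are read.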
From patchwork Mon Oct 17 05:57:13 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anshuman Khandual X-Patchwork-Id: 3216
From: Anshuman Khandual To: linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org, linux-arm-kernel@lists.infradead.org, peterz@infradead.org, acme@kernel.org, mark.rutland@arm.com, will@kernel.org, catalin.marinas@arm.com Cc: Anshuman Khandual , Mark Brown , James Clark , Rob Herring , Marc Zyngier , Suzuki Poulose , Ingo Molnar Subject: [PATCH V4 7/7] arm64/perf: Enable branch stack sampling Date: Mon, 17 Oct 2022 11:27:13 +0530 Message-Id: <20221017055713.451092-8-anshuman.khandual@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20221017055713.451092-1-anshuman.khandual@arm.com> References: <20221017055713.451092-1-anshuman.khandual@arm.com> MIME-Version: 1.0 Now that all the required pieces are in place, enable perf branch stack sampling support on the arm64 platform by removing the gate in armpmu_event_init() which blocks it. Cc: Mark Rutland Cc: Will Deacon Cc: Catalin Marinas Cc: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org Signed-off-by: Anshuman Khandual --- drivers/perf/arm_pmu.c | 25 ++++++++++++++++++++++--- 1 file changed, 22 insertions(+), 3 deletions(-) diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c index 1a8dca4e513e..dc5e4f9aca22 100644 --- a/drivers/perf/arm_pmu.c +++ b/drivers/perf/arm_pmu.c @@ -537,9 +537,28 @@ static int armpmu_event_init(struct perf_event *event) !cpumask_test_cpu(event->cpu, &armpmu->supported_cpus)) return -ENOENT; - /* does not support taken branch sampling */ - if (has_branch_stack(event)) - return -EOPNOTSUPP; + if (has_branch_stack(event)) { + /* + * BRBE support is absent.
Select CONFIG_ARM_BRBE_PMU + * in the config before branch stack sampling events + * can be requested. + */ + if (!IS_ENABLED(CONFIG_ARM_BRBE_PMU)) { + pr_info("BRBE is disabled, select CONFIG_ARM_BRBE_PMU\n"); + return -EOPNOTSUPP; + } + + /* + * A branch stack sampling event cannot be supported when + * either the required driver itself is absent or the BRBE + * buffer is not supported. Besides, checking for the callback + * prevents a crash in case it is absent. + */ + if (!armpmu->brbe_supported || !armpmu->brbe_supported(event)) { + pr_info("BRBE is not supported\n"); + return -EOPNOTSUPP; + } + } if (armpmu->map_event(event) == -ENOENT) return -ENOENT;
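For completeness, here is a minimal userspace sketch (not part of this series) of how a branch stack sampling event would be requested once this gate is lifted; the branch_sample_type flags correspond to the PERF_SAMPLE_BRANCH_* filters handled by arm64_pmu_brbe_filter() earlier in the series.

#include <linux/perf_event.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
	struct perf_event_attr attr;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	attr.sample_period = 100000;
	/* request branch records alongside each sample */
	attr.sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_BRANCH_STACK;
	/* any branch type, user space only; kernel/hv filtering follows the checks above */
	attr.branch_sample_type = PERF_SAMPLE_BRANCH_ANY | PERF_SAMPLE_BRANCH_USER;
	attr.exclude_kernel = 1;
	attr.exclude_hv = 1;

	/* measure the current task on any CPU */
	fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}
	/* branch records are delivered in the ring buffer as part of each sample */
	close(fd);
	return 0;
}

The equivalent perf tool invocation would be along the lines of 'perf record -j any,u -- <workload>'.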