From patchwork Fri Apr 21 18:45:28 2023
X-Patchwork-Submitter: "Liang, Kan"
X-Patchwork-Id: 86469
From: kan.liang@linux.intel.com
To: peterz@infradead.org, mingo@redhat.com, linux-kernel@vger.kernel.org
Cc: eranian@google.com, ak@linux.intel.com, Kan Liang
Subject: [PATCH V4 1/2] perf/x86/intel/ds: Flush the PEBS buffer in PEBS enable
Date: Fri, 21 Apr 2023 11:45:28 -0700
Message-Id: <20230421184529.3320912-1-kan.liang@linux.intel.com>

From: Kan Liang

Several similar kernel warnings can be triggered,

  [56605.607840] CPU0 PEBS record size 0, expected 32, config 0 cpuc->record_size=208

when the below commands have been running in parallel for a while on
SPR (Sapphire Rapids).

  while true;
  do
        perf record --no-buildid -a --intr-regs=AX \
                    -e cpu/event=0xd0,umask=0x81/pp \
                    -c 10003 -o /dev/null ./triad;
  done &

  while true;
  do
        perf record -o /tmp/out -W -d \
                    -e '{ld_blocks.store_forward:period=1000000, MEM_TRANS_RETIRED.LOAD_LATENCY:u:precise=2:ldlat=4}' \
                    -c 1037 ./triad;
  done

(The triad program simply generates a steady stream of loads and stores;
an illustrative sketch is included after the change log below.)

The warnings are triggered when an unexpected PEBS record (with a
different config and size) is found.

A system-wide PEBS event with the large PEBS config may be enabled
during a context switch. Some PEBS records for the system-wide PEBS
event may be generated while the old task is scheduled out but the new
one has not been scheduled in yet. When the new task is scheduled in,
cpuc->pebs_record_size may be updated for the per-task PEBS events. So
the existing system-wide PEBS records have a different size from the
later PEBS records.

The PEBS buffer should be flushed right before the hardware is
reprogrammed. The new size and threshold should be updated after the
old buffer has been flushed.

Reported-by: Stephane Eranian
Suggested-by: Peter Zijlstra (Intel)
Signed-off-by: Kan Liang
---
Changes since V3:
- update comments
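
The triad reproducer itself is not included in this posting; the change
log above only says it generates loads and stores. A minimal user-space
sketch along those lines might look like the following (illustrative
only: the file name, array size, and iteration count are arbitrary):

/* triad.c - illustrative sketch only; the real reproducer is not
 * included here.  Any program that keeps issuing loads and stores
 * should work.
 */
#include <stddef.h>

#define N (1 << 20)

double a[N], b[N], c[N];        /* globals, so the stores are not optimized away */

int main(void)
{
        const double scalar = 3.0;

        for (int pass = 0; pass < 2000; pass++)
                for (size_t i = 0; i < N; i++)
                        a[i] = b[i] + scalar * c[i];    /* loads from b/c, store to a */

        return 0;
}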

 arch/x86/events/intel/ds.c | 41 +++++++++++++++++++++++++++-----------
 1 file changed, 29 insertions(+), 12 deletions(-)

diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index a2e566e53076..94043232991c 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -1252,22 +1252,26 @@ pebs_update_state(bool needed_cb, struct cpu_hw_events *cpuc,
 	if (x86_pmu.intel_cap.pebs_baseline && add) {
 		u64 pebs_data_cfg;
 
-		/* Clear pebs_data_cfg and pebs_record_size for first PEBS. */
-		if (cpuc->n_pebs == 1) {
+		/* Clear pebs_data_cfg for first PEBS. */
+		if (cpuc->n_pebs == 1)
 			cpuc->pebs_data_cfg = 0;
-			cpuc->pebs_record_size = sizeof(struct pebs_basic);
-		}
 
 		pebs_data_cfg = pebs_update_adaptive_cfg(event);
 
-		/* Update pebs_record_size if new event requires more data. */
-		if (pebs_data_cfg & ~cpuc->pebs_data_cfg) {
+		/*
+		 * Only update the pebs_data_cfg here. The pebs_record_size
+		 * will be updated later when the new pebs_data_cfg takes effect.
+		 */
+		if (pebs_data_cfg & ~cpuc->pebs_data_cfg)
 			cpuc->pebs_data_cfg |= pebs_data_cfg;
-			adaptive_pebs_record_size_update();
-			update = true;
-		}
 	}
 
+	/*
+	 * For the adaptive PEBS, the threshold will be updated later
+	 * when the new pebs_data_cfg takes effect.
+	 * The threshold may not be accurate before that, but that
+	 * does not hurt.
+	 */
 	if (update)
 		pebs_update_threshold(cpuc);
 }
@@ -1326,6 +1330,13 @@ static void intel_pmu_pebs_via_pt_enable(struct perf_event *event)
 	wrmsrl(base + idx, value);
 }
 
+static inline void intel_pmu_drain_large_pebs(struct cpu_hw_events *cpuc)
+{
+	if (cpuc->n_pebs == cpuc->n_large_pebs &&
+	    cpuc->n_pebs != cpuc->n_pebs_via_pt)
+		intel_pmu_drain_pebs_buffer();
+}
+
 void intel_pmu_pebs_enable(struct perf_event *event)
 {
 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
@@ -1345,6 +1356,14 @@ void intel_pmu_pebs_enable(struct perf_event *event)
 	if (x86_pmu.intel_cap.pebs_baseline) {
 		hwc->config |= ICL_EVENTSEL_ADAPTIVE;
 		if (cpuc->pebs_data_cfg != cpuc->active_pebs_data_cfg) {
+			/*
+			 * drain_pebs() assumes uniform record size;
+			 * hence we need to drain when changing said
+			 * size.
+			 */
+			intel_pmu_drain_large_pebs(cpuc);
+			adaptive_pebs_record_size_update();
+			pebs_update_threshold(cpuc);
 			wrmsrl(MSR_PEBS_DATA_CFG, cpuc->pebs_data_cfg);
 			cpuc->active_pebs_data_cfg = cpuc->pebs_data_cfg;
 		}
@@ -1391,9 +1410,7 @@ void intel_pmu_pebs_disable(struct perf_event *event)
 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
 	struct hw_perf_event *hwc = &event->hw;
 
-	if (cpuc->n_pebs == cpuc->n_large_pebs &&
-	    cpuc->n_pebs != cpuc->n_pebs_via_pt)
-		intel_pmu_drain_pebs_buffer();
+	intel_pmu_drain_large_pebs(cpuc);
 
 	cpuc->pebs_enabled &= ~(1ULL << hwc->idx);


From patchwork Fri Apr 21 18:45:29 2023
X-Patchwork-Submitter: "Liang, Kan"
X-Patchwork-Id: 86470
From: kan.liang@linux.intel.com
To: peterz@infradead.org, mingo@redhat.com, linux-kernel@vger.kernel.org
Cc: eranian@google.com, ak@linux.intel.com, Kan Liang
Subject: [PATCH V4 2/2] perf/x86/intel/ds: Delay the threshold update
Date: Fri, 21 Apr 2023 11:45:29 -0700
Message-Id: <20230421184529.3320912-2-kan.liang@linux.intel.com>
In-Reply-To: <20230421184529.3320912-1-kan.liang@linux.intel.com>
References: <20230421184529.3320912-1-kan.liang@linux.intel.com>

From: Kan Liang

The update of the pebs_record_size has been delayed to the place right
before the new pebs_data_cfg takes effect for the adaptive PEBS. But
the update of the DS threshold is still done in the event_add stage.
The threshold is calculated from the pebs_record_size, so it may be
based on stale data. The value is corrected in the event_enable stage,
so there is no real harm, but the logic is quite a mess and hard to
follow.

Move the threshold update to the event_enable stage, where all the
configuration has been settled. Steal the highest bit of
cpuc->pebs_data_cfg to track whether a threshold update is required.
The threshold only needs to be updated once.

It's possible that the first event is eligible for the large PEBS
while the second event is not. The current perf implementation may
update the threshold twice in the event_add stage. This patch also
improves such cases by avoiding the extra update.

No functional change.

Signed-off-by: Kan Liang
---
This is a cleanup patch to address the comment in
https://lore.kernel.org/lkml/20230414102908.GC83892@hirez.programming.kicks-ass.net/

It doesn't fix any real issue. It just tries to make the logic clear
and consistent.
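
As a side note, the bookkeeping introduced by this patch boils down to
the pattern sketched below. This is a stand-alone, user-space
illustration with simplified types, not kernel code: PEBS_UPDATE_DS_SW
and the masking helper mirror the definitions added in the diff, while
the 0x2 value and the printf() merely stand in for a real hardware
config bit and the MSR write.

#include <stdint.h>
#include <stdio.h>

/* SW-only flag stolen from the top bit of pebs_data_cfg; never written to HW */
#define PEBS_UPDATE_DS_SW (1ULL << 63)

/* Strip the SW flag before the value is compared with or programmed into HW */
static uint64_t get_datacfg_hw(uint64_t cfg)
{
        return cfg & ~PEBS_UPDATE_DS_SW;
}

int main(void)
{
        uint64_t pebs_data_cfg = 0;

        /* event_add stage: only note that a threshold update is pending */
        pebs_data_cfg |= 0x2 | PEBS_UPDATE_DS_SW;       /* 0x2: placeholder HW bit */

        /* event_enable stage: program the hardware with the flag masked off ... */
        printf("MSR value: %#llx\n", (unsigned long long)get_datacfg_hw(pebs_data_cfg));

        /* ... then do the threshold update exactly once and clear the flag */
        if (pebs_data_cfg & PEBS_UPDATE_DS_SW) {
                pebs_data_cfg &= ~PEBS_UPDATE_DS_SW;
                /* pebs_update_threshold() would run here in the kernel */
        }

        return 0;
}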

 arch/x86/events/intel/ds.c        | 34 ++++++++++++-------------------
 arch/x86/include/asm/perf_event.h |  8 ++++++++
 2 files changed, 21 insertions(+), 21 deletions(-)

diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index 94043232991c..554a58318787 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -1229,12 +1229,14 @@ pebs_update_state(bool needed_cb, struct cpu_hw_events *cpuc,
 			  struct perf_event *event, bool add)
 {
 	struct pmu *pmu = event->pmu;
+
 	/*
 	 * Make sure we get updated with the first PEBS
 	 * event. It will trigger also during removal, but
 	 * that does not hurt:
 	 */
-	bool update = cpuc->n_pebs == 1;
+	if (cpuc->n_pebs == 1)
+		cpuc->pebs_data_cfg = PEBS_UPDATE_DS_SW;
 
 	if (needed_cb != pebs_needs_sched_cb(cpuc)) {
 		if (!needed_cb)
@@ -1242,7 +1244,7 @@ pebs_update_state(bool needed_cb, struct cpu_hw_events *cpuc,
 		else
 			perf_sched_cb_dec(pmu);
 
-		update = true;
+		cpuc->pebs_data_cfg |= PEBS_UPDATE_DS_SW;
 	}
 
 	/*
@@ -1252,28 +1254,15 @@ pebs_update_state(bool needed_cb, struct cpu_hw_events *cpuc,
 	if (x86_pmu.intel_cap.pebs_baseline && add) {
 		u64 pebs_data_cfg;
 
-		/* Clear pebs_data_cfg for first PEBS. */
-		if (cpuc->n_pebs == 1)
-			cpuc->pebs_data_cfg = 0;
-
 		pebs_data_cfg = pebs_update_adaptive_cfg(event);
 
 		/*
 		 * Only update the pebs_data_cfg here. The pebs_record_size
 		 * will be updated later when the new pebs_data_cfg takes effect.
 		 */
-		if (pebs_data_cfg & ~cpuc->pebs_data_cfg)
-			cpuc->pebs_data_cfg |= pebs_data_cfg;
+		if (pebs_data_cfg & ~get_pebs_datacfg_hw(cpuc->pebs_data_cfg))
+			cpuc->pebs_data_cfg |= pebs_data_cfg | PEBS_UPDATE_DS_SW;
 	}
-
-	/*
-	 * For the adaptive PEBS, the threshold will be updated later
-	 * when the new pebs_data_cfg takes effect.
-	 * The threshold may not be accurate before that, but that
-	 * does not hurt.
-	 */
-	if (update)
-		pebs_update_threshold(cpuc);
 }
 
 void intel_pmu_pebs_add(struct perf_event *event)
@@ -1355,7 +1344,7 @@ void intel_pmu_pebs_enable(struct perf_event *event)
 
 	if (x86_pmu.intel_cap.pebs_baseline) {
 		hwc->config |= ICL_EVENTSEL_ADAPTIVE;
-		if (cpuc->pebs_data_cfg != cpuc->active_pebs_data_cfg) {
+		if (get_pebs_datacfg_hw(cpuc->pebs_data_cfg) != cpuc->active_pebs_data_cfg) {
 			/*
 			 * drain_pebs() assumes uniform record size;
 			 * hence we need to drain when changing said
@@ -1363,11 +1352,14 @@ void intel_pmu_pebs_enable(struct perf_event *event)
 			 */
 			intel_pmu_drain_large_pebs(cpuc);
 			adaptive_pebs_record_size_update();
-			pebs_update_threshold(cpuc);
-			wrmsrl(MSR_PEBS_DATA_CFG, cpuc->pebs_data_cfg);
-			cpuc->active_pebs_data_cfg = cpuc->pebs_data_cfg;
+			wrmsrl(MSR_PEBS_DATA_CFG, get_pebs_datacfg_hw(cpuc->pebs_data_cfg));
+			cpuc->active_pebs_data_cfg = get_pebs_datacfg_hw(cpuc->pebs_data_cfg);
 		}
 	}
+	if (cpuc->pebs_data_cfg & PEBS_UPDATE_DS_SW) {
+		cpuc->pebs_data_cfg &= ~PEBS_UPDATE_DS_SW;
+		pebs_update_threshold(cpuc);
+	}
 
 	if (idx >= INTEL_PMC_IDX_FIXED) {
 		if (x86_pmu.intel_cap.pebs_format < 5)
diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index 8fc15ed5e60b..259a2a8afe2b 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -121,6 +121,14 @@
 #define PEBS_DATACFG_LBRS	BIT_ULL(3)
 #define PEBS_DATACFG_LBR_SHIFT	24
 
+/* Steal the highest bit of pebs_data_cfg for SW usage */
+#define PEBS_UPDATE_DS_SW	BIT_ULL(63)
+
+static inline u64 get_pebs_datacfg_hw(u64 config)
+{
+	return config & ~PEBS_UPDATE_DS_SW;
+}
+
 /*
  * Intel "Architectural Performance Monitoring" CPUID
  * detection/enumeration details: