Message ID: 20230612093539.895253662@infradead.org
State: New
From: Peter Zijlstra <peterz@infradead.org>
Date: Mon, 12 Jun 2023 11:07:46 +0200
To: torvalds@linux-foundation.org, keescook@chromium.org, gregkh@linuxfoundation.org, pbonzini@redhat.com
Cc: linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, dmaengine@vger.kernel.org, llvm@lists.linux.dev, linux-perf-users@vger.kernel.org, rcu@vger.kernel.org, linux-security-module@vger.kernel.org
Subject: [PATCH v3 33/57] perf: Simplify perf_adjust_freq_unthr_context()
References: <20230612090713.652690195@infradead.org>
Series: Scope-based Resource Management
Commit Message
Peter Zijlstra
June 12, 2023, 9:07 a.m. UTC
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
kernel/events/core.c | 10 +++-------
1 file changed, 3 insertions(+), 7 deletions(-)
Comments
This seems to have the same pattern that I *think* is entirely broken, ie you are doing that

	guard(perf_pmu_disable)(event->pmu);

inside a loop, and then you are randomly just removing the

	perf_pmu_enable(event->pmu);

at the end of the loop, or when you do a "continue".

That's not right. The thing does not go out of scope when the loop *iterates*. It only goes out of scope when the loop *ends*. Big difference as far as cleanup goes.

So you have not "simplified" the unlocking code, you've broken it. Now it no longer locks and unlocks for each iteration, it tries to re-lock every time.

Or have I mis-understood something completely?

Linus
On Mon, Jun 12, 2023 at 09:27:09AM -0700, Linus Torvalds wrote:

> The thing does not go out of scope when the loop *iterates*.
>
> It only goes out of scope when the loop *ends*.

> Or have I mis-understood something completely?

I tried this before I used it and variables inside a for() loop have a
scope of a single iteration.

  $ gcc -O2 -o guard guard.c && ./guard
  spin_lock
  ponies
  __raw_spin_lock_irqsave
  can haz
  raw_spin_unlock_irqrestore
  mutex_lock
  mutex_unlock
  mutex_lock
  mutex_unlock
  mutex_lock
  mutex_unlock
  mutex_lock
  mutex_unlock
  mutex_lock
  mutex_unlock
  spin_unlock

---
#include <stdio.h>
#include <stdbool.h>

typedef struct { } raw_spinlock_t;
typedef struct { } spinlock_t;
typedef struct { } mutex_t;

void raw_spin_lock(raw_spinlock_t *lock) { printf("%s\n", __FUNCTION__); }
void raw_spin_unlock(raw_spinlock_t *lock) { printf("%s\n", __FUNCTION__); }

unsigned long __raw_spin_lock_irqsave(raw_spinlock_t *lock)
{
	printf("%s\n", __FUNCTION__);
	return 0;
}

#define raw_spin_lock_irqsave(lock, flags) \
	flags = __raw_spin_lock_irqsave(lock)

void raw_spin_unlock_irqrestore(raw_spinlock_t *lock, unsigned long flags)
{
	printf("%s\n", __FUNCTION__);
}

void spin_lock(spinlock_t *lock) { printf("%s\n", __FUNCTION__); }
void spin_unlock(spinlock_t *lock) { printf("%s\n", __FUNCTION__); }

void mutex_lock(mutex_t *lock) { printf("%s\n", __FUNCTION__); }
void mutex_unlock(mutex_t *lock) { printf("%s\n", __FUNCTION__); }

#define DEFINE_LOCK_GUARD(_type, _Type, _Lock, _Unlock, ...)		\
typedef struct {							\
	_Type *lock;							\
	__VA_ARGS__							\
} lock_guard_##_type##_t;						\
									\
static inline void lock_guard_##_type##_cleanup(void *_g)		\
{									\
	lock_guard_##_type##_t *_G = _g;				\
	if (_G->lock)							\
		_Unlock;						\
}									\
									\
static inline lock_guard_##_type##_t lock_guard_##_type##_init(_Type *lock) \
{									\
	lock_guard_##_type##_t _g = { .lock = lock }, *_G = &_g;	\
	_Lock;								\
	return _g;							\
}

DEFINE_LOCK_GUARD(raw, raw_spinlock_t,
		  raw_spin_lock(_G->lock),
		  raw_spin_unlock(_G->lock))

DEFINE_LOCK_GUARD(spin, spinlock_t,
		  spin_lock(_G->lock),
		  spin_unlock(_G->lock))

DEFINE_LOCK_GUARD(mutex, mutex_t,
		  mutex_lock(_G->lock),
		  mutex_unlock(_G->lock))

DEFINE_LOCK_GUARD(raw_irqsave, raw_spinlock_t,
		  raw_spin_lock_irqsave(_G->lock, _G->flags),
		  raw_spin_unlock_irqrestore(_G->lock, _G->flags),
		  unsigned long flags;)

#define __cleanup(func) __attribute__((__cleanup__(func)))

#define lock_guard(_type, _name, _ptr) \
	lock_guard_##_type##_t _name __cleanup(lock_guard_##_type##_cleanup) = \
		lock_guard_##_type##_init(_ptr)

#define lock_scope(_type, _ptr) \
	for (struct { lock_guard_##_type##_t guard; bool done; } _scope \
	     __cleanup(lock_guard_##_type##_cleanup) = \
	     { .guard = lock_guard_##_type##_init(_ptr), .done = false }; \
	     !_scope.done; _scope.done = true)

raw_spinlock_t raw_lock;
spinlock_t lock;
mutex_t mutex;

void main(void)
{
	lock_guard(spin, guard, &lock);
	printf("ponies\n");

	lock_scope(raw_irqsave, &raw_lock) {
		printf("can haz\n");
	}

	for (int i = 0; i < 5; i++) {
		lock_guard(mutex, foo, &mutex);
		continue;
	}
}
On Mon, Jun 12, 2023 at 11:44 AM Peter Zijlstra <peterz@infradead.org> wrote:

> I tried this before I used it and variables inside a for() loop have a
> scope of a single iteration.

Whee. Ok, that surprised me. But I guess that I shouldn't have found it surprising, since the value doesn't survive from one iteration to the next. My mental picture of the scope was just different - and apparently wrong.

But thinking about it, it's not just that the value doesn't survive, it's also that the "continue" will exit the scope in order to go back to the "for()" loop.

I guess the "goto repeat" ends up being similar, since - as Ian Lance Taylor said in one of the earlier discussions - that "__cleanup__" ends up creating an implicit hidden scope for the variable. So a 'goto' to before the variable was declared implicitly exits the scope.

Ugh. I do really hate how subtle that is, though. So while it might not be the horrible bug I thought it was, I'd _really_ like us to not do those things just from a sanity angle.

Linus
On Mon, Jun 12, 2023 at 11:55:57AM -0700, Linus Torvalds wrote:

> So while it might not be the horrible bug I thought it was, I'd
> _really_ like us to not do those things just from a sanity angle.

OK, let me go see if there's another way to sanely write those. Is this less offensive?

  restart:
	scoped_guard (rcu) {
		list_for_each_entry_rcu(..) {
			...
			if (err == -EAGAIN)
				goto restart;
		}
	}

That has the explicit scope on the rcu and now the goto explicitly exits it. Gonna have to think what to do about the continue ...
On Mon, Jun 12, 2023 at 11:55:57AM -0700, Linus Torvalds wrote:

> But thinking about it, it's not just that the value doesn't survive,
> it's also that the "continue" will exit the scope in order to go back
> to the "for()" loop.

So if you still feel the continue is a step too far, the alternative isn't horrible either..

---
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -4090,7 +4090,7 @@ perf_adjust_freq_unthr_context(struct pe
 	if (!(ctx->nr_freq || unthrottle))
 		return;
 
-	raw_spin_lock(&ctx->lock);
+	guard(raw_spinlock)(&ctx->lock);
 
 	list_for_each_entry_rcu(event, &ctx->event_list, event_entry) {
 		if (event->state != PERF_EVENT_STATE_ACTIVE)
@@ -4100,7 +4100,7 @@ perf_adjust_freq_unthr_context(struct pe
 		if (!event_filter_match(event))
 			continue;
 
-		perf_pmu_disable(event->pmu);
+		guard(perf_pmu_disable)(event->pmu);
 
 		hwc = &event->hw;
 
@@ -4110,34 +4110,29 @@ perf_adjust_freq_unthr_context(struct pe
 			event->pmu->start(event, 0);
 		}
 
-		if (!event->attr.freq || !event->attr.sample_freq)
-			goto next;
+		if (event->attr.freq && event->attr.sample_freq) {
+			/*
+			 * stop the event and update event->count
+			 */
+			event->pmu->stop(event, PERF_EF_UPDATE);
+
+			now = local64_read(&event->count);
+			delta = now - hwc->freq_count_stamp;
+			hwc->freq_count_stamp = now;
+
+			/*
+			 * restart the event
+			 * reload only if value has changed
+			 * we have stopped the event so tell that
+			 * to perf_adjust_period() to avoid stopping it
+			 * twice.
+			 */
+			if (delta > 0)
+				perf_adjust_period(event, period, delta, false);
 
-		/*
-		 * stop the event and update event->count
-		 */
-		event->pmu->stop(event, PERF_EF_UPDATE);
-
-		now = local64_read(&event->count);
-		delta = now - hwc->freq_count_stamp;
-		hwc->freq_count_stamp = now;
-
-		/*
-		 * restart the event
-		 * reload only if value has changed
-		 * we have stopped the event so tell that
-		 * to perf_adjust_period() to avoid stopping it
-		 * twice.
-		 */
-		if (delta > 0)
-			perf_adjust_period(event, period, delta, false);
-
-		event->pmu->start(event, delta > 0 ? PERF_EF_RELOAD : 0);
-	next:
-		perf_pmu_enable(event->pmu);
+			event->pmu->start(event, delta > 0 ? PERF_EF_RELOAD : 0);
+		}
 	}
-
-	raw_spin_unlock(&ctx->lock);
 }
 
 /*
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -4100,7 +4100,7 @@ perf_adjust_freq_unthr_context(struct pe
 	if (!(ctx->nr_freq || unthrottle))
 		return;
 
-	raw_spin_lock(&ctx->lock);
+	guard(raw_spinlock)(&ctx->lock);
 
 	list_for_each_entry_rcu(event, &ctx->event_list, event_entry) {
 		if (event->state != PERF_EVENT_STATE_ACTIVE)
@@ -4110,7 +4110,7 @@ perf_adjust_freq_unthr_context(struct pe
 		if (!event_filter_match(event))
 			continue;
 
-		perf_pmu_disable(event->pmu);
+		guard(perf_pmu_disable)(event->pmu);
 
 		hwc = &event->hw;
 
@@ -4121,7 +4121,7 @@ perf_adjust_freq_unthr_context(struct pe
 		}
 
 		if (!event->attr.freq || !event->attr.sample_freq)
-			goto next;
+			continue;
 
 		/*
 		 * stop the event and update event->count
@@ -4143,11 +4143,7 @@ perf_adjust_freq_unthr_context(struct pe
 			perf_adjust_period(event, period, delta, false);
 
 		event->pmu->start(event, delta > 0 ? PERF_EF_RELOAD : 0);
-	next:
-		perf_pmu_enable(event->pmu);
 	}
-
-	raw_spin_unlock(&ctx->lock);
 }
 
 /*