[v5,2/4] dt-bindings: cpufreq: apple,soc-cpufreq: Add binding for Apple SoC cpufreq
Message ID: 20221128142912.16022-3-marcan@marcan.st
State: New
Headers:
From: Hector Martin <marcan@marcan.st>
To: "Rafael J. Wysocki" <rafael@kernel.org>, Viresh Kumar <viresh.kumar@linaro.org>, Matthias Brugger <matthias.bgg@gmail.com>
Cc: Hector Martin <marcan@marcan.st>, Sven Peter <sven@svenpeter.dev>, Alyssa Rosenzweig <alyssa@rosenzweig.io>, Rob Herring <robh+dt@kernel.org>, Krzysztof Kozlowski <krzysztof.kozlowski+dt@linaro.org>, Stephen Boyd <sboyd@kernel.org>, Ulf Hansson <ulf.hansson@linaro.org>, Marc Zyngier <maz@kernel.org>, Mark Kettenis <mark.kettenis@xs4all.nl>, asahi@lists.linux.dev, linux-arm-kernel@lists.infradead.org, linux-pm@vger.kernel.org, devicetree@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v5 2/4] dt-bindings: cpufreq: apple,soc-cpufreq: Add binding for Apple SoC cpufreq
Date: Mon, 28 Nov 2022 23:29:10 +0900
Message-Id: <20221128142912.16022-3-marcan@marcan.st>
In-Reply-To: <20221128142912.16022-1-marcan@marcan.st>
Series: Apple SoC cpufreq driver
Commit Message
Hector Martin
Nov. 28, 2022, 2:29 p.m. UTC
This binding represents the cpufreq/DVFS hardware present in Apple SoCs.
The hardware has an independent controller per CPU cluster, and we
represent them as unique nodes in order to accurately describe the
hardware. The driver is responsible for binding them as a single cpufreq
device (in the Linux cpufreq model).

Acked-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Hector Martin <marcan@marcan.st>
---
 .../cpufreq/apple,cluster-cpufreq.yaml        | 117 ++++++++++++++++++
 1 file changed, 117 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/cpufreq/apple,cluster-cpufreq.yaml
Comments
On 28/11/2022 15:29, Hector Martin wrote:
> This binding represents the cpufreq/DVFS hardware present in Apple SoCs.
> The hardware has an independent controller per CPU cluster, and we
> represent them as unique nodes in order to accurately describe the
> hardware. The driver is responsible for binding them as a single cpufreq
> device (in the Linux cpufreq model).

Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>

Best regards,
Krzysztof
On Mon, 28 Nov 2022 at 15:29, Hector Martin <marcan@marcan.st> wrote:
>
> This binding represents the cpufreq/DVFS hardware present in Apple SoCs.
> The hardware has an independent controller per CPU cluster, and we
> represent them as unique nodes in order to accurately describe the
> hardware. The driver is responsible for binding them as a single cpufreq
> device (in the Linux cpufreq model).
>
> Acked-by: Marc Zyngier <maz@kernel.org>
> Signed-off-by: Hector Martin <marcan@marcan.st>
> ---

[...]

> +examples:
> +  - |
> +    // This example shows a single CPU per domain and 2 domains,
> +    // with two p-states per domain.
> +    // Shipping hardware has 2-4 CPUs per domain and 2-6 domains.
> +    cpus {
> +      #address-cells = <2>;
> +      #size-cells = <0>;
> +
> +      cpu@0 {
> +        compatible = "apple,icestorm";
> +        device_type = "cpu";
> +        reg = <0x0 0x0>;
> +        operating-points-v2 = <&ecluster_opp>;

To me, it looks like the operating-points-v2 phandle better belongs in
the performance-domains provider node. I mean, aren't the OPPs really a
description of the performance-domain provider?

That said, I suggest we try to extend the generic performance-domain
binding [1] with an "operating-points-v2". In that way, we should
instead be able to reference it from this binding.

In fact, that would be very similar to what already exists for the
generic power-domain binding [2]. I think it would be rather nice to
follow a similar pattern for the performance-domain binding.

> +        performance-domains = <&cpufreq_e>;
> +      };

[...]

Kind regards
Uffe

[1] Documentation/devicetree/bindings/dvfs/performance-domain.yaml
[2] Documentation/devicetree/bindings/power/power-domain.yaml
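For illustration, a hypothetical sketch of what Ulf's suggestion might
look like in DT source. The operating-points-v2 property on the provider
node is an assumed extension of the generic performance-domain binding,
not part of the posted schema:

    cpufreq_e: performance-controller@210e20000 {
        compatible = "apple,t8103-cluster-cpufreq", "apple,cluster-cpufreq";
        reg = <0x2 0x10e20000 0 0x1000>;
        #performance-domain-cells = <0>;
        /* Assumed extension: OPPs described on the provider itself */
        operating-points-v2 = <&ecluster_opp>;
    };

    cpu@0 {
        compatible = "apple,icestorm";
        device_type = "cpu";
        reg = <0x0 0x0>;
        /* CPUs would then only reference their performance domain */
        performance-domains = <&cpufreq_e>;
    };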
On 29/11/2022 20.36, Ulf Hansson wrote:
> On Mon, 28 Nov 2022 at 15:29, Hector Martin <marcan@marcan.st> wrote:
>> +examples:
>> +  - |
>> +    // This example shows a single CPU per domain and 2 domains,
>> +    // with two p-states per domain.
>> +    // Shipping hardware has 2-4 CPUs per domain and 2-6 domains.
>> +    cpus {
>> +      #address-cells = <2>;
>> +      #size-cells = <0>;
>> +
>> +      cpu@0 {
>> +        compatible = "apple,icestorm";
>> +        device_type = "cpu";
>> +        reg = <0x0 0x0>;
>> +        operating-points-v2 = <&ecluster_opp>;
>
> To me, it looks like the operating-points-v2 phandle better belongs in
> the performance-domains provider node. I mean, aren't the OPPs really a
> description of the performance-domain provider?
>
> That said, I suggest we try to extend the generic performance-domain
> binding [1] with an "operating-points-v2". In that way, we should
> instead be able to reference it from this binding.
>
> In fact, that would be very similar to what already exists for the
> generic power-domain binding [2]. I think it would be rather nice to
> follow a similar pattern for the performance-domain binding.

While I agree with the technical rationale and the proposed approach
being better in principle...

We're at v5 of bikeshedding this trivial driver's DT binding, and the
comment could've been made at v3. To quote IRC just now:

> this way the machines will be obsolete before things are fully upstreamed

I think it's long overdue for the kernel community to take a deep look
at itself and its development and review process, because it is quite
honestly insane how pathologically inefficient it is compared to,
basically, every other large and healthy open source project of similar
or even greater impact and scope.

Cc Linus, because this is for your Mac and I assume you care. We're at
v5 here for this silly driver. Meanwhile, rmk recently threw in the
towel on upstreaming macsmc for us. We're trying, and I'll keep trying
because I actually get paid (by very generous donors) to do this, but if
I weren't I'd have given up a long time ago. And while I won't give up,
I can't deny this situation affects my morale and willingness to keep
pushing on upstreaming on a regular basis.

Meanwhile, OpenBSD has been *shipping* full M1 support for a while now
in official release images (and since Linux is the source of truth for
DT bindings, every time we re-bikeshed it we break their users because
they, quite reasonably, aren't interested in waiting for us Linux
slowpokes to figure it out first).

Please, let's introspect about this for a moment. Something is deeply
broken if people with 25+ years as an arch maintainer can't get a
700-line mfd driver upstreamed before giving up. I don't know how we
expect to ever get a Rust GPU driver merged if it takes 6+ versions to
upstream the world's easiest cpufreq hardware.

- Hector
On 2022-11-29 11:36, Ulf Hansson wrote:
> On Mon, 28 Nov 2022 at 15:29, Hector Martin <marcan@marcan.st> wrote:

[...]

> To me, it looks like the operating-points-v2 phandle better belongs in
> the performance-domains provider node. I mean, aren't the OPPs really a
> description of the performance-domain provider?
>
> That said, I suggest we try to extend the generic performance-domain
> binding [1] with an "operating-points-v2". In that way, we should
> instead be able to reference it from this binding.
>
> In fact, that would be very similar to what already exists for the
> generic power-domain binding [2]. I think it would be rather nice to
> follow a similar pattern for the performance-domain binding.

I'm not going to rabbit-hole into whether this is a good or a bad
binding. As far as I'm concerned, and as a user of the HW it describes,
it does the job in a satisfactory manner.

What I'm concerned about is this constant, last-minute bikeshedding that
actively prevents upstreaming of support for HW that people are actively
using. If anything, this is only causing people to stop trying to
contribute to the upstream kernel because of this constant DT bikeshed.

Honestly, writing code to support this HW is a lot less effort than
trying to get 3 different people to agree on a binding. It also affects
other OSs that rely on the same bindings, and will eventually only
result in developers not submitting bindings anymore. After all, nothing
says that bindings and device trees have to exist in the Linux kernel
tree.

M.
On 29/11/2022 15:00, Hector Martin wrote:
> On 29/11/2022 20.36, Ulf Hansson wrote:

[...]

> Please, let's introspect about this for a moment. Something is deeply
> broken if people with 25+ years as an arch maintainer can't get a

If an arch maintainer sends patches which do not build (make
dt_binding_check), then what do you exactly expect? Accept them just
because it is 25+ years of experience or a maintainer? So we have
different processes - for beginners code should compile. For experienced
people, it does not have to build because otherwise they will get
discouraged?

> 700-line mfd driver upstreamed before giving up. I don't know how we
> expect to ever get a Rust GPU driver merged if it takes 6+ versions to
> upstream the world's easiest cpufreq hardware.

While I understand your point about bikeshedding, I think your previous
bindings got pretty nice and fast reviews, so using an example of a
non-building case is a poor choice.

Best regards,
Krzysztof
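For reference, the check Krzysztof refers to can be run locally before
posting. A minimal invocation, assuming the schema path added by this
patch, looks like this; DT_SCHEMA_FILES restricts validation to a single
schema so the run stays fast:

    # Validate only this binding and its example
    make dt_binding_check \
        DT_SCHEMA_FILES=Documentation/devicetree/bindings/cpufreq/apple,cluster-cpufreq.yaml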
On 29/11/2022 23.34, Krzysztof Kozlowski wrote:
> On 29/11/2022 15:00, Hector Martin wrote:
>> On 29/11/2022 20.36, Ulf Hansson wrote:
>> Please, let's introspect about this for a moment. Something is deeply
>> broken if people with 25+ years as an arch maintainer can't get a
>
> If an arch maintainer sends patches which do not build (make
> dt_binding_check), then what do you exactly expect? Accept them just
> because it is 25+ years of experience or a maintainer? So we have
> different processes - for beginners code should compile. For
> experienced people, it does not have to build because otherwise they
> will get discouraged?

I expect the process to not be so confusing and frustrating that a
maintainer with 25+ years of experience gives up. That the bindings
didn't pass the checker is beside the point. People say the Linux kernel
community is hostile to newbies. This issue proves it's not just
newbies, the process is failing even experienced folks.

On that specific issue, any other functional open source project would
have the binding checks be a CI bot, with a friendly message telling you
what to do to fix it, and it would re-run when you push to the PR again,
which is a *much* lower friction action than sending a whole new patch
series out for review via email (if you don't agree with this, then
you're not the average contributor - the Linux kernel is by far the
scariest major open source project to contribute to, and I think most
people would agree with me on that).

I know Rob has a DT checker bot, but its error output is practically
line noise, and the error email doesn't even mention the
DT_SCHEMA_FILES= make option (which is the only way to make the check
not take *forever* to run). Absolutely nobody is going to look at those
emails without already knowing the intricacies of DT bindings and the
checker and not find them incredibly frustrating.

But it's not just the DT checker. That came after an argument where the
MFD maintainer complained about the driver while offering no solution
nor proposed path forward. I had to have an IRC conversation with him to
work it out, after which he accepted one of the options I'd already
proposed over email. If you have to change communication mediums to
resolve an issue, that means your initial medium failed at its job.

Not to mention the random drive-by reviews, nitpicks, disagreements
between maintainers about how to do things, or just plain cases of
maintainers stubbornly being *wrong* and refusing to listen while
everyone around them is telling them they're wrong (until someone above
them in the maintainer tree tells them they're wrong - then they finally
get it. If it takes someone in a position of authority telling you
you're wrong for you to accept it, you're doing a poor job at your own
position.)

And then there's the coding style. The rest of the world has
standardized on formatting tools. Here, every subsystem maintainer has
their own pet style you have to learn. "Please put your variables in
reverse christmas tree order". I once got a review comment that
complained that I re-aligned an existing #define when I added another
one (to keep them visually aligned, as the new one increased the
required tab stop). Because "cleanup patches should be separate" (I
didn't submit a cleanup patch after that, I just left the defines
misaligned). So now I need to maintain a massive mental map of exactly
what style conventions and patch breakup conventions every subsystem
maintainer wants me to use.

I'm so glad `make rustfmt` is a thing now. Maybe 50 years down the line,
most of the kernel will have been rewritten in Rust and we'll finally
fix just one of these problems /s

Some of these things are, to some extent, a natural result of working
with other humans. But the kernel community has both a number of humans
harder to work with than is reasonable in their position, and an overall
process that multiplies the resulting pain by an order of magnitude for
everyone involved.

> While I understand your point about bikeshedding, I think your
> previous bindings got pretty nice and fast reviews, so using an example
> of a non-building case is a poor choice.

Yeah, after a while, because I learned how to do DT bindings the hard
way after having to submit a bunch and getting everything wrong (and
even then I still get the simplest things wrong, see: v4 here). Which is
why I'll pick up after rmk's attempt and try again with macsmc at some
point (probably after 6.1). But I'm not holding my breath that I won't
need another half dozen rounds of bikeshedding.

When it takes 10 times more man-hours to upstream a driver than to
reverse engineer and write it with zero documentation, never mind the
calendar time it takes, something is very wrong. I actively dread
submitting new drivers to new subsystems or some existing ones now. How
much pain will the next one be? Will I be asked to move files around 3
times? Spend 4 rounds bikeshedding the DT schema? Think it's finally
ready only for someone to show up and ask to change a major part of the
design at the last minute?

And this all when we're actually submitting code of decent quality (I
think I have enough experience not to submit crap most of the time). Now
imagine how much worse this all is for a newbie who submits a
well-intentioned but definitely not up to standard patch. There's a huge
chance they'll give up before ever getting the submission through.

Again, I'll keep pushing and sending out patches, because this is
important. But this is the reality. This is how bad it is. The process
is broken, and everyone outside the kernel community can see that and
has been saying that for ages. Just because some of us are willing to
put up with the pain anyway doesn't mean it's working as intended.

- Hector
On 29/11/2022 16:17, Hector Martin wrote:
> On 29/11/2022 23.34, Krzysztof Kozlowski wrote:
>> On 29/11/2022 15:00, Hector Martin wrote:
>>> On 29/11/2022 20.36, Ulf Hansson wrote:
>>> Please, let's introspect about this for a moment. Something is deeply
>>> broken if people with 25+ years as an arch maintainer can't get a
>>
>> If an arch maintainer sends patches which do not build (make
>> dt_binding_check), then what do you exactly expect? Accept them just
>> because it is 25+ years of experience or a maintainer? So we have
>> different processes - for beginners code should compile. For
>> experienced people, it does not have to build because otherwise they
>> will get discouraged?
>
> I expect the process to not be so confusing and frustrating that a
> maintainer with 25+ years of experience gives up. That the bindings
> didn't pass the checker is beside the point. People say the Linux
> kernel community is hostile to newbies. This issue proves it's not just
> newbies, the process is failing even experienced folks.
>
> On that specific issue, any other functional open source project would
> have the binding checks be a CI bot, with a friendly message telling
> you what to do to fix it, and it would re-run when you push to the PR
> again, which is a *much* lower friction action than sending a whole new
> patch series out for review via email (if you don't agree with this,
> then you're not the average contributor - the Linux kernel is by far
> the scariest major open source project to contribute to, and I think
> most people would agree with me on that).

I agree with this entirely. Not only DT checks, but also for driver code
with sparse/smatch/coccinelle/W=1/clang builds.

> I know Rob has a DT checker bot, but its error output is practically
> line noise, and the error email doesn't even mention the
> DT_SCHEMA_FILES= make option (which is the only way to make the check
> not take *forever* to run). Absolutely nobody is going to look at those
> emails without already knowing the intricacies of DT bindings and the
> checker and not find them incredibly frustrating.

Ack

> But it's not just the DT checker. That came after an argument where the
> MFD maintainer complained about the driver while offering no solution
> nor proposed path forward. I had to have an IRC conversation with him
> to work it out, after which he accepted one of the options I'd already
> proposed over email. If you have to change communication mediums to
> resolve an issue, that means your initial medium failed at its job.

Ack

> Not to mention the random drive-by reviews, nitpicks, disagreements
> between maintainers about how to do things, or just plain cases of
> maintainers stubbornly being *wrong* and refusing to listen while
> everyone around them is telling them they're wrong (until someone above
> them in the maintainer tree tells them they're wrong - then they
> finally get it. If it takes someone in a position of authority telling
> you you're wrong for you to accept it, you're doing a poor job at your
> own position.)

Ack

> And then there's the coding style. The rest of the world has
> standardized on formatting tools. Here, every subsystem maintainer has
> their own pet style you have to learn. "Please put your variables in
> reverse christmas tree order".

Ack here as well, and by the above Acks I meant: I agree, you are right.
I don't think we should discuss people's differences - I mean, different
people can give you different reviews, sometimes less and sometimes more
thorough. However, at least policies/processes and coding style should
be heavily unified.

> I once got a review comment that complained that I re-aligned an
> existing #define when I added another one (to keep them visually
> aligned, as the new one increased the required tab stop). Because
> "cleanup patches should be separate" (I didn't submit a cleanup patch
> after that, I just left the defines misaligned). So now I need to
> maintain a massive mental map of exactly what style conventions and
> patch breakup conventions every subsystem maintainer wants me to use.

Yep. I just have a doc, instead of a mental map. :)

> I'm so glad `make rustfmt` is a thing now. Maybe 50 years down the
> line, most of the kernel will have been rewritten in Rust and we'll
> finally fix just one of these problems /s

+1

> Some of these things are, to some extent, a natural result of working
> with other humans. But the kernel community has both a number of humans
> harder to work with than is reasonable in their position, and an
> overall process that multiplies the resulting pain by an order of
> magnitude for everyone involved.
>
>> While I understand your point about bikeshedding, I think your
>> previous bindings got pretty nice and fast reviews, so using an
>> example of a non-building case is a poor choice.
>
> Yeah, after a while, because I learned how to do DT bindings the hard
> way after having to submit a bunch and getting everything wrong (and
> even then I still get the simplest things wrong, see: v4 here). Which
> is why I'll pick up after rmk's attempt and try again with macsmc at
> some point (probably after 6.1). But I'm not holding my breath that I
> won't need another half dozen rounds of bikeshedding.
>
> When it takes 10 times more man-hours to upstream a driver than to
> reverse engineer and write it with zero documentation, never mind the
> calendar time it takes, something is very wrong.

Yes, that's a point. Yet we (the community) still have another goal here
than the pure goal of the submitter. The submitter wants their stuff
upstream; therefore this upstreaming effort is for the submitter a bit
wasted (as I agree that reverse-engineering should be the biggest task).

However the community (expressed maybe mostly as maintainers) wants
something which will be maintainable and usable also for others, not
only for patch submitters.

Indeed maybe a 10x difference between writing code and upstreaming is
something to improve, but please do not forget that the concept here is
to be able to manage this big codebase.

> I actively dread submitting new drivers to new subsystems or some
> existing ones now. How much pain will the next one be? Will I be asked
> to move files around 3 times? Spend 4 rounds bikeshedding the DT
> schema? Think it's finally ready only for someone to show up and ask to
> change a major part of the design at the last minute?
>
> And this all when we're actually submitting code of decent quality (I
> think I have enough experience not to submit crap most of the time).
> Now imagine how much worse this all is for a newbie who submits a
> well-intentioned but definitely not up to standard patch. There's a
> huge chance they'll give up before ever getting the submission through.

You say this like any newbie should be able to send a patch and get it
accepted right away, regardless of what is actually in that patch (not
up to standard). It's not a newbie's right. No one said it's an easy and
fast process... If you want easy and fast, do JavaScript... If you want
to make it easier and faster in the kernel, then we need more reviewers
and maintainers. Somehow many, many people want to send patches, but not
that many want to review them.

> Again, I'll keep pushing and sending out patches, because this is
> important. But this is the reality. This is how bad it is. The process
> is broken, and everyone outside the kernel community can see that and
> has been saying that for ages. Just because some of us are willing to
> put up with the pain anyway doesn't mean it's working as intended.

Best regards,
Krzysztof
On 30/11/2022 00.46, Krzysztof Kozlowski wrote:
> On 29/11/2022 16:17, Hector Martin wrote:
>> When it takes 10 times more man-hours to upstream a driver than to
>> reverse engineer and write it with zero documentation, never mind the
>> calendar time it takes, something is very wrong.
>
> Yes, that's a point. Yet we (the community) still have another goal
> here than the pure goal of the submitter. The submitter wants their
> stuff upstream; therefore this upstreaming effort is for the submitter
> a bit wasted (as I agree that reverse-engineering should be the biggest
> task).
>
> However the community (expressed maybe mostly as maintainers) wants
> something which will be maintainable and usable also for others, not
> only for patch submitters.
>
> Indeed maybe a 10x difference between writing code and upstreaming is
> something to improve, but please do not forget that the concept here is
> to be able to manage this big codebase.

You're misunderstanding me here. The problem isn't the goal of keeping
things maintainable. The problem is that the process involved makes the
entire experience take much, much longer than it should. A more
efficient process could maintain the same (actual) code quality with a
lot less time wasted for everyone, both in process issues and in
bikeshedding details that don't actually increase code quality.

>> I actively dread submitting new drivers to new subsystems or some
>> existing ones now. How much pain will the next one be? Will I be asked
>> to move files around 3 times? Spend 4 rounds bikeshedding the DT
>> schema? Think it's finally ready only for someone to show up and ask
>> to change a major part of the design at the last minute?
>>
>> And this all when we're actually submitting code of decent quality (I
>> think I have enough experience not to submit crap most of the time).
>> Now imagine how much worse this all is for a newbie who submits a
>> well-intentioned but definitely not up to standard patch. There's a
>> huge chance they'll give up before ever getting the submission
>> through.
>
> You say this like any newbie should be able to send a patch and get it
> accepted right away, regardless of what is actually in that patch (not
> up to standard). It's not a newbie's right. No one said it's an easy
> and fast process... If you want easy and fast, do JavaScript... If you
> want to make it easier and faster in the kernel, then we need more
> reviewers and maintainers. Somehow many, many people want to send
> patches, but not that many want to review them.

Again, I'm not saying the bad code should go in. I'm saying the
*process* is so frustrating that most newbies will give up before the
code has a chance to evolve from bad to good and go in. The solution
isn't to let more bad code in, it's to make the process less painful.

And yes, some of it does mean either relaxing style rules or just
adopting a formatter, because let's face it: complaints that a line is
81 characters instead of 80 don't *actually* increase code quality to
any measurable extent, they just waste everyone's time, and if that
level of style pickiness is desired, that's what formatters are for.

I have personal experience with this with the Asahi Linux bootloader
(m1n1). We use a formatter for C; the CI tells people when they have to
run `make format` to fix their style and that solves all our C
formatting issues (incidentally, we also have a CI bot to check for
S-o-b lines - very handy).

We don't use a formatter for Python, and as a result, I don't impose
overly strict rules on things like exactly how many lines of whitespace
should exist between functions or exactly how expression continuations
should be indented or what the maximum line length is (within reason),
because code doesn't have to look like it was written by a homogeneous
robot to still be completely understandable and readable to humans using
modern editing tools, and trying to teach every contributor my own
personal style (which isn't even 100% consistent, I'm not a robot
either!) would be a waste of my and their time. Most contributors are
capable of defaulting to a readable coding style similar to the rest of
the code around it, and it's rare I have to tell someone to fix the
formatting because something is clearly wrong (like blatantly
inconsistent block indentation).

- Hector
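As an illustration of the kind of bot Hector describes, here is a
hypothetical sketch of a Signed-off-by CI gate. This is not the actual
m1n1 script, and the origin/main branch name is an assumption:

    #!/bin/sh
    # Hypothetical sketch: fail the CI run if any new commit on the
    # branch lacks a Signed-off-by line.
    for c in $(git rev-list origin/main..HEAD); do
        git show -s --format=%B "$c" | grep -q '^Signed-off-by:' || {
            echo "commit $c is missing a Signed-off-by line" >&2
            exit 1
        }
    done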
On Tue, 29 Nov 2022 at 15:00, Hector Martin <marcan@marcan.st> wrote:
>
> On 29/11/2022 20.36, Ulf Hansson wrote:

[...]

> While I agree with the technical rationale and the proposed approach
> being better in principle...
>
> We're at v5 of bikeshedding this trivial driver's DT binding, and the
> comment could've been made at v3. To quote IRC just now:

It could and I certainly apologize for that. It's simply been a busy
period for me, so I haven't been able to look closer at the DT bindings,
until now.

>> this way the machines will be obsolete before things are fully upstreamed
>
> I think it's long overdue for the kernel community to take a deep look
> at itself and its development and review process, because it is quite
> honestly insane how pathologically inefficient it is compared to,
> basically, every other large and healthy open source project of similar
> or even greater impact and scope.
>
> Cc Linus, because this is for your Mac and I assume you care. We're at
> v5 here for this silly driver. Meanwhile, rmk recently threw in the
> towel on upstreaming macsmc for us. We're trying, and I'll keep trying
> because I actually get paid (by very generous donors) to do this, but
> if I weren't I'd have given up a long time ago. And while I won't give
> up, I can't deny this situation affects my morale and willingness to
> keep pushing on upstreaming on a regular basis.
>
> Meanwhile, OpenBSD has been *shipping* full M1 support for a while now
> in official release images (and since Linux is the source of truth for
> DT bindings, every time we re-bikeshed it we break their users because
> they, quite reasonably, aren't interested in waiting for us Linux
> slowpokes to figure it out first).
>
> Please, let's introspect about this for a moment. Something is deeply
> broken if people with 25+ years as an arch maintainer can't get a
> 700-line mfd driver upstreamed before giving up. I don't know how we
> expect to ever get a Rust GPU driver merged if it takes 6+ versions to
> upstream the world's easiest cpufreq hardware.
>
> - Hector

I didn't intend to bikeshed this, while I do understand your valid
concerns from the above statements. Instead, my intent was to help, by
reviewing. Simply, because I care about this too.

If you think incorporating the changes I proposed is too big a deal at
this point, let me not stand in the way of applying this. In the end,
it's the DT maintainers' decision.

Kind regards
Uffe
On Wed, Nov 30, 2022 at 12:17:08AM +0900, Hector Martin wrote:
> On 29/11/2022 23.34, Krzysztof Kozlowski wrote:
> > On 29/11/2022 15:00, Hector Martin wrote:
> >> On 29/11/2022 20.36, Ulf Hansson wrote:
> >> Please, let's introspect about this for a moment. Something is deeply
> >> broken if people with 25+ years as an arch maintainer can't get a
> >
> > If an arch maintainer sends patches which do not build (make
> > dt_binding_check), then what do you exactly expect? Accept them just
> > because it is 25+ years of experience or a maintainer? So we have
> > different processes - for beginners code should compile. For
> > experienced people, it does not have to build because otherwise they
> > will get discouraged?
>
> I expect the process to not be so confusing and frustrating that a
> maintainer with 25+ years of experience gives up. That the bindings
> didn't pass the checker is beside the point. People say the Linux
> kernel community is hostile to newbies. This issue proves it's not just
> newbies, the process is failing even experienced folks.

IME, a lack of response is a bigger issue and more frustrating.

> On that specific issue, any other functional open source project would
> have the binding checks be a CI bot, with a friendly message telling
> you what to do to fix it, and it would re-run when you push to the PR
> again, which is a *much* lower friction action than sending a whole new
> patch series out for review via email (if you don't agree with this,
> then you're not the average contributor - the Linux kernel is by far
> the scariest major open source project to contribute to, and I think
> most people would agree with me on that).

We could probably add a $ci_provider job description to do that. In
fact, I did try that once[1]. The challenge would be what to run if
there's multiple maintainers doing something. Otherwise, it's a
maintainer creating their own thing, which we have too much of already.

> I know Rob has a DT checker bot, but its error output is practically
> line noise,

I'm not sure what to do there beyond the 'hint' lines I've added. It's
kind of how json-schema functions, unfortunately. I think it stems from
each schema keyword being evaluated independently.

> and the error email doesn't even mention the DT_SCHEMA_FILES= make
> option (which is the only way to make the check not take *forever* to
> run).

That's easy enough to add and a specific suggestion I can act on.
However, note that the full thing still has to be run, because any
schema change can affect any other example (which is a large part of why
it's slow).

> Absolutely nobody is going to look at those emails without already
> knowing the intricacies of DT bindings and the checker and not find
> them incredibly frustrating.

I don't know how else to enable someone not understanding DT bindings
nor json-schema to write DT bindings. I get that json-schema is not
something kernel developers typically know already. Trust me, the
alternatives proposed for a schema over the years would have been much
worse.

Rob

[1] https://lore.kernel.org/all/20181003222715.28667-1-robh@kernel.org/
On Mon, 28 Nov 2022 23:29:10 +0900, Hector Martin wrote:
> This binding represents the cpufreq/DVFS hardware present in Apple SoCs.
> The hardware has an independent controller per CPU cluster, and we
> represent them as unique nodes in order to accurately describe the
> hardware. The driver is responsible for binding them as a single cpufreq
> device (in the Linux cpufreq model).
>
> Acked-by: Marc Zyngier <maz@kernel.org>
> Signed-off-by: Hector Martin <marcan@marcan.st>
> ---
>  .../cpufreq/apple,cluster-cpufreq.yaml        | 117 ++++++++++++++++++
>  1 file changed, 117 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/cpufreq/apple,cluster-cpufreq.yaml

In case there's any doubt:

Reviewed-by: Rob Herring <robh@kernel.org>
On Tue, Nov 29, 2022 at 5:28 PM Rob Herring <robh@kernel.org> wrote:
>
> On Wed, Nov 30, 2022 at 12:17:08AM +0900, Hector Martin wrote:
> > On 29/11/2022 23.34, Krzysztof Kozlowski wrote:

[...]

> > On that specific issue, any other functional open source project
> > would have the binding checks be a CI bot, with a friendly message
> > telling you what to do to fix it, and it would re-run when you push
> > to the PR again, which is a *much* lower friction action than sending
> > a whole new patch series out for review via email (if you don't agree
> > with this, then you're not the average contributor - the Linux kernel
> > is by far the scariest major open source project to contribute to,
> > and I think most people would agree with me on that).
>
> We could probably add a $ci_provider job description to do that. In
> fact, I did try that once[1]. The challenge would be what to run if
> there's multiple maintainers doing something. Otherwise, it's a
> maintainer creating their own thing, which we have too much of already.

Actually, it turns out this pretty much already exists with my CI. I
just had to turn on merge requests on the project. If anyone actually
uses it, I'll have to tweak it to not do 'make dtbs_check', because that
is really slow. And this all runs on my machines, so that is another
issue. It already is just running it for patches on the list (which is a
different CI job).

Just create an MR here:

https://gitlab.com/robherring/linux-dt/-/merge_requests

Rob
diff --git a/Documentation/devicetree/bindings/cpufreq/apple,cluster-cpufreq.yaml b/Documentation/devicetree/bindings/cpufreq/apple,cluster-cpufreq.yaml
new file mode 100644
index 000000000000..76cb9726660e
--- /dev/null
+++ b/Documentation/devicetree/bindings/cpufreq/apple,cluster-cpufreq.yaml
@@ -0,0 +1,117 @@
+# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/cpufreq/apple,cluster-cpufreq.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Apple SoC cluster cpufreq device
+
+maintainers:
+  - Hector Martin <marcan@marcan.st>
+
+description: |
+  Apple SoCs (e.g. M1) have a per-cpu-cluster DVFS controller that is part of
+  the cluster management register block. This binding uses the standard
+  operating-points-v2 table to define the CPU performance states, with the
+  opp-level property specifying the hardware p-state index for that level.
+
+properties:
+  compatible:
+    oneOf:
+      - items:
+          - enum:
+              - apple,t8103-cluster-cpufreq
+              - apple,t8112-cluster-cpufreq
+          - const: apple,cluster-cpufreq
+      - items:
+          - const: apple,t6000-cluster-cpufreq
+          - const: apple,t8103-cluster-cpufreq
+          - const: apple,cluster-cpufreq
+
+  reg:
+    maxItems: 1
+
+  '#performance-domain-cells':
+    const: 0
+
+required:
+  - compatible
+  - reg
+  - '#performance-domain-cells'
+
+additionalProperties: false
+
+examples:
+  - |
+    // This example shows a single CPU per domain and 2 domains,
+    // with two p-states per domain.
+    // Shipping hardware has 2-4 CPUs per domain and 2-6 domains.
+    cpus {
+      #address-cells = <2>;
+      #size-cells = <0>;
+
+      cpu@0 {
+        compatible = "apple,icestorm";
+        device_type = "cpu";
+        reg = <0x0 0x0>;
+        operating-points-v2 = <&ecluster_opp>;
+        performance-domains = <&cpufreq_e>;
+      };
+
+      cpu@10100 {
+        compatible = "apple,firestorm";
+        device_type = "cpu";
+        reg = <0x0 0x10100>;
+        operating-points-v2 = <&pcluster_opp>;
+        performance-domains = <&cpufreq_p>;
+      };
+    };
+
+    ecluster_opp: opp-table-0 {
+      compatible = "operating-points-v2";
+      opp-shared;
+
+      opp01 {
+        opp-hz = /bits/ 64 <600000000>;
+        opp-level = <1>;
+        clock-latency-ns = <7500>;
+      };
+      opp02 {
+        opp-hz = /bits/ 64 <972000000>;
+        opp-level = <2>;
+        clock-latency-ns = <22000>;
+      };
+    };
+
+    pcluster_opp: opp-table-1 {
+      compatible = "operating-points-v2";
+      opp-shared;
+
+      opp01 {
+        opp-hz = /bits/ 64 <600000000>;
+        opp-level = <1>;
+        clock-latency-ns = <8000>;
+      };
+      opp02 {
+        opp-hz = /bits/ 64 <828000000>;
+        opp-level = <2>;
+        clock-latency-ns = <19000>;
+      };
+    };
+
+    soc {
+      #address-cells = <2>;
+      #size-cells = <2>;
+
+      cpufreq_e: performance-controller@210e20000 {
+        compatible = "apple,t8103-cluster-cpufreq", "apple,cluster-cpufreq";
+        reg = <0x2 0x10e20000 0 0x1000>;
+        #performance-domain-cells = <0>;
+      };
+
+      cpufreq_p: performance-controller@211e20000 {
+        compatible = "apple,t8103-cluster-cpufreq", "apple,cluster-cpufreq";
+        reg = <0x2 0x11e20000 0 0x1000>;
+        #performance-domain-cells = <0>;
+      };
+    };