From patchwork Tue Apr 11 10:59:59 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sumit Gupta
X-Patchwork-Id: 81884
From: Sumit Gupta <sumitg@nvidia.com>
Subject: [Patch v6 6/9] cpufreq: tegra194: add OPP support and set bandwidth
Date: Tue, 11 Apr 2023 16:29:59 +0530
Message-ID: <20230411110002.19824-7-sumitg@nvidia.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230411110002.19824-1-sumitg@nvidia.com>
References: <20230411110002.19824-1-sumitg@nvidia.com>
X-Mailing-List: linux-kernel@vger.kernel.org
Add support to use the OPP table from DT in the Tegra194 cpufreq driver.

Tegra SoCs receive the frequency lookup table (LUT) from BPMP-FW. Cross-check the OPPs present in DT against this LUT and enable only those DT OPPs that are also present in the LUT.

The OPP table in DT maps each CPU frequency to a bandwidth value, where the bandwidth is per MC channel. DRAM bandwidth depends on the number of MC channels, which can vary with the boot configuration. The MC driver later converts this per-channel bandwidth from the OPP table to the final bandwidth value by multiplying it by the number of channels before sending the request to BPMP-FW.

If no OPP table is present in DT, use the LUT from BPMP-FW directly as the CPU frequency table and skip DRAM frequency scaling, which matches the current behavior.

Now that the CPU frequency table is controlled through the OPP table in DT, keeping fewer entries in the table creates fewer frequency steps and can help the CPU scale quickly to high frequencies when required.
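For illustration, below is a minimal sketch of the kind of OPP table this driver consumes. The node names and values are made up for this example and not taken from the series; opp-hz and opp-peak-kBps are the standard operating-points-v2 properties the OPP core reads, with opp-peak-kBps here carrying the per-MC-channel bandwidth described above:

	cpu0_opp_table: opp-table-cluster0 {
		compatible = "operating-points-v2";
		opp-shared;

		opp-1036800000 {
			opp-hz = /bits/ 64 <1036800000>;
			opp-peak-kBps = <1600000>;
		};

		opp-2201600000 {
			opp-hz = /bits/ 64 <2201600000>;
			opp-peak-kBps = <6400000>;
		};
	};

An OPP whose opp-hz value has no match in the BPMP-FW LUT stays disabled after the cross-check described above.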
Signed-off-by: Sumit Gupta <sumitg@nvidia.com>
---
 drivers/cpufreq/tegra194-cpufreq.c | 156 ++++++++++++++++++++++++++---
 1 file changed, 143 insertions(+), 13 deletions(-)

diff --git a/drivers/cpufreq/tegra194-cpufreq.c b/drivers/cpufreq/tegra194-cpufreq.c
index 5890e25d7f77..c8d03346068a 100644
--- a/drivers/cpufreq/tegra194-cpufreq.c
+++ b/drivers/cpufreq/tegra194-cpufreq.c
@@ -12,6 +12,7 @@
 #include <linux/of_platform.h>
 #include <linux/platform_device.h>
 #include <linux/slab.h>
+#include <linux/pm_opp.h>
 
 #include <asm/smp_plat.h>
 
@@ -65,12 +66,36 @@ struct tegra_cpufreq_soc {
 
 struct tegra194_cpufreq_data {
 	void __iomem *regs;
-	struct cpufreq_frequency_table **tables;
+	struct cpufreq_frequency_table **bpmp_luts;
 	const struct tegra_cpufreq_soc *soc;
+	bool icc_dram_bw_scaling;
 };
 
 static struct workqueue_struct *read_counters_wq;
 
+static int tegra_cpufreq_set_bw(struct cpufreq_policy *policy, unsigned long freq_khz)
+{
+	struct tegra194_cpufreq_data *data = cpufreq_get_driver_data();
+	struct dev_pm_opp *opp;
+	struct device *dev;
+	int ret;
+
+	dev = get_cpu_device(policy->cpu);
+	if (!dev)
+		return -ENODEV;
+
+	opp = dev_pm_opp_find_freq_exact(dev, freq_khz * KHZ, true);
+	if (IS_ERR(opp))
+		return PTR_ERR(opp);
+
+	ret = dev_pm_opp_set_opp(dev, opp);
+	if (ret)
+		data->icc_dram_bw_scaling = false;
+
+	dev_pm_opp_put(opp);
+	return ret;
+}
+
 static void tegra_get_cpu_mpidr(void *mpidr)
 {
 	*((u64 *)mpidr) = read_cpuid_mpidr() & MPIDR_HWID_BITMASK;
@@ -354,7 +379,7 @@ static unsigned int tegra194_get_speed(u32 cpu)
 	 * to the last written ndiv value from freq_table. This is
 	 * done to return consistent value.
 	 */
-	cpufreq_for_each_valid_entry(pos, data->tables[clusterid]) {
+	cpufreq_for_each_valid_entry(pos, data->bpmp_luts[clusterid]) {
 		if (pos->driver_data != ndiv)
 			continue;
 
@@ -369,16 +394,93 @@ static unsigned int tegra194_get_speed(u32 cpu)
 	return rate;
 }
 
+static int tegra_cpufreq_init_cpufreq_table(struct cpufreq_policy *policy,
+					    struct cpufreq_frequency_table *bpmp_lut,
+					    struct cpufreq_frequency_table **opp_table)
+{
+	struct tegra194_cpufreq_data *data = cpufreq_get_driver_data();
+	struct cpufreq_frequency_table *freq_table = NULL;
+	struct cpufreq_frequency_table *pos;
+	struct device *cpu_dev;
+	struct dev_pm_opp *opp;
+	unsigned long rate;
+	int ret, max_opps;
+	int j = 0;
+
+	cpu_dev = get_cpu_device(policy->cpu);
+	if (!cpu_dev) {
+		pr_err("%s: failed to get cpu%d device\n", __func__, policy->cpu);
+		return -ENODEV;
+	}
+
+	/* Initialize OPP table mentioned in operating-points-v2 property in DT */
+	ret = dev_pm_opp_of_add_table_indexed(cpu_dev, 0);
+	if (!ret) {
+		max_opps = dev_pm_opp_get_opp_count(cpu_dev);
+		if (max_opps <= 0) {
+			dev_err(cpu_dev, "Failed to add OPPs\n");
+			return max_opps;
+		}
+
+		/* Disable all opps and cross-validate against LUT later */
+		for (rate = 0; ; rate++) {
+			opp = dev_pm_opp_find_freq_ceil(cpu_dev, &rate);
+			if (IS_ERR(opp))
+				break;
+
+			dev_pm_opp_put(opp);
+			dev_pm_opp_disable(cpu_dev, rate);
+		}
+	} else {
+		dev_err(cpu_dev, "Invalid or empty opp table in device tree\n");
+		data->icc_dram_bw_scaling = false;
+		return ret;
+	}
+
+	freq_table = kcalloc((max_opps + 1), sizeof(*freq_table), GFP_KERNEL);
+	if (!freq_table)
+		return -ENOMEM;
+
+	/*
+	 * Cross check the frequencies from BPMP-FW LUT against the OPP's present in DT.
+	 * Enable only those DT OPP's which are present in LUT also.
+	 */
+	cpufreq_for_each_valid_entry(pos, bpmp_lut) {
+		opp = dev_pm_opp_find_freq_exact(cpu_dev, pos->frequency * KHZ, false);
+		if (IS_ERR(opp))
+			continue;
+
+		ret = dev_pm_opp_enable(cpu_dev, pos->frequency * KHZ);
+		if (ret < 0)
+			return ret;
+
+		freq_table[j].driver_data = pos->driver_data;
+		freq_table[j].frequency = pos->frequency;
+		j++;
+	}
+
+	freq_table[j].driver_data = pos->driver_data;
+	freq_table[j].frequency = CPUFREQ_TABLE_END;
+
+	*opp_table = &freq_table[0];
+
+	dev_pm_opp_set_sharing_cpus(cpu_dev, policy->cpus);
+
+	return ret;
+}
+
 static int tegra194_cpufreq_init(struct cpufreq_policy *policy)
 {
 	struct tegra194_cpufreq_data *data = cpufreq_get_driver_data();
 	int maxcpus_per_cluster = data->soc->maxcpus_per_cluster;
+	struct cpufreq_frequency_table *freq_table;
+	struct cpufreq_frequency_table *bpmp_lut;
 	u32 start_cpu, cpu;
 	u32 clusterid;
+	int ret;
 
 	data->soc->ops->get_cpu_cluster_id(policy->cpu, NULL, &clusterid);
-
-	if (clusterid >= data->soc->num_clusters || !data->tables[clusterid])
+	if (clusterid >= data->soc->num_clusters || !data->bpmp_luts[clusterid])
 		return -EINVAL;
 
 	start_cpu = rounddown(policy->cpu, maxcpus_per_cluster);
@@ -387,9 +489,22 @@ static int tegra194_cpufreq_init(struct cpufreq_policy *policy)
 		if (cpu_possible(cpu))
 			cpumask_set_cpu(cpu, policy->cpus);
 	}
-	policy->freq_table = data->tables[clusterid];
 	policy->cpuinfo.transition_latency = TEGRA_CPUFREQ_TRANSITION_LATENCY;
 
+	bpmp_lut = data->bpmp_luts[clusterid];
+
+	if (data->icc_dram_bw_scaling) {
+		ret = tegra_cpufreq_init_cpufreq_table(policy, bpmp_lut, &freq_table);
+		if (!ret) {
+			policy->freq_table = freq_table;
+			return 0;
+		}
+	}
+
+	data->icc_dram_bw_scaling = false;
+	policy->freq_table = bpmp_lut;
+	pr_info("OPP tables missing from DT, EMC frequency scaling disabled\n");
+
 	return 0;
 }
 
@@ -406,6 +521,9 @@ static int tegra194_cpufreq_set_target(struct cpufreq_policy *policy,
 	 */
 	data->soc->ops->set_cpu_ndiv(policy, (u64)tbl->driver_data);
 
+	if (data->icc_dram_bw_scaling)
+		tegra_cpufreq_set_bw(policy, tbl->frequency);
+
 	return 0;
 }
 
@@ -439,8 +557,8 @@ static void tegra194_cpufreq_free_resources(void)
 }
 
 static struct cpufreq_frequency_table *
-init_freq_table(struct platform_device *pdev, struct tegra_bpmp *bpmp,
-		unsigned int cluster_id)
+tegra_cpufreq_bpmp_read_lut(struct platform_device *pdev, struct tegra_bpmp *bpmp,
+			    unsigned int cluster_id)
 {
 	struct cpufreq_frequency_table *freq_table;
 	struct mrq_cpu_ndiv_limits_response resp;
@@ -515,6 +633,7 @@ static int tegra194_cpufreq_probe(struct platform_device *pdev)
 	const struct tegra_cpufreq_soc *soc;
 	struct tegra194_cpufreq_data *data;
 	struct tegra_bpmp *bpmp;
+	struct device *cpu_dev;
 	int err, i;
 
 	data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
@@ -530,9 +649,9 @@ static int tegra194_cpufreq_probe(struct platform_device *pdev)
 		return -EINVAL;
 	}
 
-	data->tables = devm_kcalloc(&pdev->dev, data->soc->num_clusters,
-				    sizeof(*data->tables), GFP_KERNEL);
-	if (!data->tables)
+	data->bpmp_luts = devm_kcalloc(&pdev->dev, data->soc->num_clusters,
+				       sizeof(*data->bpmp_luts), GFP_KERNEL);
+	if (!data->bpmp_luts)
 		return -ENOMEM;
 
 	if (soc->actmon_cntr_base) {
@@ -556,15 +675,26 @@ static int tegra194_cpufreq_probe(struct platform_device *pdev)
 	}
 
 	for (i = 0; i < data->soc->num_clusters; i++) {
-		data->tables[i] = init_freq_table(pdev, bpmp, i);
-		if (IS_ERR(data->tables[i])) {
-			err = PTR_ERR(data->tables[i]);
+		data->bpmp_luts[i] = tegra_cpufreq_bpmp_read_lut(pdev, bpmp, i);
+		if (IS_ERR(data->bpmp_luts[i])) {
+			err = PTR_ERR(data->bpmp_luts[i]);
 			goto err_free_res;
 		}
 	}
 
 	tegra194_cpufreq_driver.driver_data = data;
 
+	/* Check for optional OPPv2 and interconnect paths on CPU0 to enable ICC scaling */
+	cpu_dev = get_cpu_device(0);
+	if (!cpu_dev)
+		return -EPROBE_DEFER;
+
+	if (dev_pm_opp_of_get_opp_desc_node(cpu_dev)) {
+		err = dev_pm_opp_of_find_icc_paths(cpu_dev, NULL);
+		if (!err)
+			data->icc_dram_bw_scaling = true;
+	}
+
 	err = cpufreq_register_driver(&tegra194_cpufreq_driver);
 	if (!err)
 		goto put_bpmp;
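For completeness, here is a sketch of the CPU node wiring that the probe-time check above looks for: dev_pm_opp_of_get_opp_desc_node() expects an operating-points-v2 phandle on the CPU device node, and dev_pm_opp_of_find_icc_paths() an interconnects path there. The phandle and specifier names below are illustrative assumptions, not taken from this patch:

	cpu@0 {
		device_type = "cpu";
		compatible = "arm,cortex-a78";
		operating-points-v2 = <&cpu0_opp_table>;
		interconnects = <&mc TEGRA_ICC_MC_CPU_CLUSTER0 &emc>;
	};

If either property is missing, icc_dram_bw_scaling remains false and the driver falls back to the BPMP-FW LUT without issuing DRAM bandwidth requests.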