From patchwork Mon Oct 31 07:37:58 2022
X-Patchwork-Submitter: MD Danish Anwar
X-Patchwork-Id: 13136
From: MD Danish Anwar
To: Mathieu Poirier, Krzysztof Kozlowski, Rob Herring
CC: Suman Anna, Roger Quadros, Andrew F. Davis, MD Danish Anwar
Subject: [PATCH v7 2/5] remoteproc: pru: Add APIs to get and put the PRU cores
Date: Mon, 31 Oct 2022 13:07:58 +0530
Message-ID: <20221031073801.130541-3-danishanwar@ti.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20221031073801.130541-1-danishanwar@ti.com>
References: <20221031073801.130541-1-danishanwar@ti.com>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Tero Kristo

Add two new APIs, pru_rproc_get() and pru_rproc_put(), to the PRU driver
to allow client drivers to acquire and release the remoteproc device
associated with a PRU core. The PRU cores are treated as resources, with
only one client owning a core at a time.

The pru_rproc_get() function returns the rproc handle corresponding to a
PRU core identified by the device tree "ti,prus" property under the
client node. pru_rproc_put() is the complementary function to
pru_rproc_get().

Co-developed-by: Suman Anna
Signed-off-by: Suman Anna
Signed-off-by: Tero Kristo
Co-developed-by: Grzegorz Jaszczyk
Signed-off-by: Grzegorz Jaszczyk
Co-developed-by: Puranjay Mohan
Signed-off-by: Puranjay Mohan
Signed-off-by: MD Danish Anwar
---
 drivers/remoteproc/pru_rproc.c | 142 +++++++++++++++++++++++++++++++--
 include/linux/pruss.h          |  56 +++++++++++++
 2 files changed, 193 insertions(+), 5 deletions(-)
 create mode 100644 include/linux/pruss.h

diff --git a/drivers/remoteproc/pru_rproc.c b/drivers/remoteproc/pru_rproc.c
index 128bf9912f2c..9ba73cfc29e2 100644
--- a/drivers/remoteproc/pru_rproc.c
+++ b/drivers/remoteproc/pru_rproc.c
@@ -2,12 +2,14 @@
 /*
  * PRU-ICSS remoteproc driver for various TI SoCs
  *
- * Copyright (C) 2014-2020 Texas Instruments Incorporated - https://www.ti.com/
+ * Copyright (C) 2014-2022 Texas Instruments Incorporated - https://www.ti.com/
  *
  * Author(s):
  *	Suman Anna
  *	Andrew F. Davis
  *	Grzegorz Jaszczyk for Texas Instruments
+ *	Puranjay Mohan
+ *	Md Danish Anwar
  */
 
 #include
@@ -16,6 +18,7 @@
 #include
 #include
 #include
+#include <linux/pruss.h>
 #include
 #include
 
@@ -111,6 +114,8 @@ struct pru_private_data {
  * @rproc: remoteproc pointer for this PRU core
  * @data: PRU core specific data
  * @mem_regions: data for each of the PRU memory regions
+ * @client_np: client device node
+ * @lock: mutex to protect client usage
  * @fw_name: name of firmware image used during loading
  * @mapped_irq: virtual interrupt numbers of created fw specific mapping
  * @pru_interrupt_map: pointer to interrupt mapping description (firmware)
@@ -126,6 +131,8 @@ struct pru_rproc {
 	struct rproc *rproc;
 	const struct pru_private_data *data;
 	struct pruss_mem_region mem_regions[PRU_IOMEM_MAX];
+	struct device_node *client_np;
+	struct mutex lock; /* client access lock */
 	const char *fw_name;
 	unsigned int *mapped_irq;
 	struct pru_irq_rsc *pru_interrupt_map;
@@ -146,6 +153,127 @@ void pru_control_write_reg(struct pru_rproc *pru, unsigned int reg, u32 val)
 	writel_relaxed(val, pru->mem_regions[PRU_IOMEM_CTRL].va + reg);
 }
 
+static struct rproc *__pru_rproc_get(struct device_node *np, int index)
+{
+	struct rproc *rproc;
+	phandle rproc_phandle;
+	int ret;
+
+	ret = of_property_read_u32_index(np, "ti,prus", index, &rproc_phandle);
+	if (ret)
+		return ERR_PTR(ret);
+
+	rproc = rproc_get_by_phandle(rproc_phandle);
+	if (!rproc) {
+		ret = -EPROBE_DEFER;
+		goto err_no_rproc_handle;
+	}
+
+	/* make sure it is PRU rproc */
+	if (!is_pru_rproc(rproc->dev.parent)) {
+		rproc_put(rproc);
+		return ERR_PTR(-ENODEV);
+	}
+
+	get_device(&rproc->dev);
+
+	return rproc;
+
+err_no_rproc_handle:
+	rproc_put(rproc);
+	return ERR_PTR(ret);
+}
+
+/**
+ * pru_rproc_get() - get the PRU rproc instance from a device node
+ * @np: the user/client device node
+ * @index: index to use for the ti,prus property
+ * @pru_id: optional pointer to return the PRU remoteproc processor id
+ *
+ * This function looks through a client device node's "ti,prus" property at
+ * index @index and returns the rproc handle for a valid PRU remote processor
+ * if found. The function allows only one user to own the PRU rproc resource
+ * at a time. The caller must call pru_rproc_put() when done using the rproc;
+ * this is not required if the function returns a failure.
+ *
+ * When the optional @pru_id pointer is passed, the PRU remoteproc processor
+ * id is returned through it.
+ *
+ * Return: rproc handle on success, and an ERR_PTR on failure using one
+ * of the following error values
+ * -ENODEV if device is not found
+ * -EBUSY if PRU is already acquired by anyone
+ * -EPROBE_DEFER if PRU device is not probed yet
+ */
+struct rproc *pru_rproc_get(struct device_node *np, int index,
+			    enum pruss_pru_id *pru_id)
+{
+	struct rproc *rproc;
+	struct pru_rproc *pru;
+	struct device *dev;
+	int ret;
+
+	rproc = __pru_rproc_get(np, index);
+	if (IS_ERR(rproc))
+		return rproc;
+
+	pru = rproc->priv;
+	dev = &rproc->dev;
+
+	mutex_lock(&pru->lock);
+
+	if (pru->client_np) {
+		mutex_unlock(&pru->lock);
+		put_device(dev);
+		ret = -EBUSY;
+		goto err_no_rproc_handle;
+	}
+
+	pru->client_np = np;
+
+	mutex_unlock(&pru->lock);
+
+	if (pru_id)
+		*pru_id = pru->id;
+
+	return rproc;
+
+err_no_rproc_handle:
+	rproc_put(rproc);
+	return ERR_PTR(ret);
+}
+EXPORT_SYMBOL_GPL(pru_rproc_get);
+
+/**
+ * pru_rproc_put() - release the PRU rproc resource
+ * @rproc: the rproc resource to release
+ *
+ * Releases the PRU rproc resource and makes it available to other
+ * users.
+ */
+void pru_rproc_put(struct rproc *rproc)
+{
+	struct pru_rproc *pru;
+
+	if (IS_ERR_OR_NULL(rproc) || !is_pru_rproc(rproc->dev.parent))
+		return;
+
+	pru = rproc->priv;
+
+	mutex_lock(&pru->lock);
+
+	if (!pru->client_np) {
+		mutex_unlock(&pru->lock);
+		return;
+	}
+
+	pru->client_np = NULL;
+	mutex_unlock(&pru->lock);
+
+	rproc_put(rproc);
+}
+EXPORT_SYMBOL_GPL(pru_rproc_put);
+
 static inline u32 pru_debug_read_reg(struct pru_rproc *pru, unsigned int reg)
 {
 	return readl_relaxed(pru->mem_regions[PRU_IOMEM_DEBUG].va + reg);
@@ -438,7 +566,7 @@ static void *pru_d_da_to_va(struct pru_rproc *pru, u32 da, size_t len)
 	dram0 = pruss->mem_regions[PRUSS_MEM_DRAM0];
 	dram1 = pruss->mem_regions[PRUSS_MEM_DRAM1];
 	/* PRU1 has its local RAM addresses reversed */
-	if (pru->id == 1)
+	if (pru->id == PRUSS_PRU1)
 		swap(dram0, dram1);
 
 	shrd_ram = pruss->mem_regions[PRUSS_MEM_SHRD_RAM2];
@@ -747,14 +875,14 @@ static int pru_rproc_set_id(struct pru_rproc *pru)
 	case RTU0_IRAM_ADDR_MASK:
 		fallthrough;
 	case PRU0_IRAM_ADDR_MASK:
-		pru->id = 0;
+		pru->id = PRUSS_PRU0;
 		break;
 	case TX_PRU1_IRAM_ADDR_MASK:
 		fallthrough;
 	case RTU1_IRAM_ADDR_MASK:
 		fallthrough;
 	case PRU1_IRAM_ADDR_MASK:
-		pru->id = 1;
+		pru->id = PRUSS_PRU1;
 		break;
 	default:
 		ret = -EINVAL;
@@ -816,6 +944,8 @@ static int pru_rproc_probe(struct platform_device *pdev)
 	pru->pruss = platform_get_drvdata(ppdev);
 	pru->rproc = rproc;
 	pru->fw_name = fw_name;
+	pru->client_np = NULL;
+	mutex_init(&pru->lock);
 
 	for (i = 0; i < ARRAY_SIZE(mem_names); i++) {
 		res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
@@ -904,7 +1034,7 @@ MODULE_DEVICE_TABLE(of, pru_rproc_match);
 
 static struct platform_driver pru_rproc_driver = {
 	.driver = {
-		.name = "pru-rproc",
+		.name = PRU_RPROC_DRVNAME,
 		.of_match_table = pru_rproc_match,
 		.suppress_bind_attrs = true,
 	},
@@ -916,5 +1046,7 @@ module_platform_driver(pru_rproc_driver);
 MODULE_AUTHOR("Suman Anna ");
 MODULE_AUTHOR("Andrew F. Davis ");
 MODULE_AUTHOR("Grzegorz Jaszczyk ");
+MODULE_AUTHOR("Puranjay Mohan ");
+MODULE_AUTHOR("Md Danish Anwar ");
 MODULE_DESCRIPTION("PRU-ICSS Remote Processor Driver");
 MODULE_LICENSE("GPL v2");
diff --git a/include/linux/pruss.h b/include/linux/pruss.h
new file mode 100644
index 000000000000..fdc719b43db0
--- /dev/null
+++ b/include/linux/pruss.h
@@ -0,0 +1,56 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/**
+ * PRU-ICSS Subsystem user interfaces
+ *
+ * Copyright (C) 2015-2022 Texas Instruments Incorporated - http://www.ti.com
+ *	Suman Anna
+ */
+
+#ifndef __LINUX_PRUSS_H
+#define __LINUX_PRUSS_H
+
+#include
+#include
+
+#define PRU_RPROC_DRVNAME "pru-rproc"
+
+/*
+ * enum pruss_pru_id - PRU core identifiers
+ */
+enum pruss_pru_id {
+	PRUSS_PRU0 = 0,
+	PRUSS_PRU1,
+	PRUSS_NUM_PRUS,
+};
+
+struct device_node;
+
+#if IS_ENABLED(CONFIG_PRU_REMOTEPROC)
+
+struct rproc *pru_rproc_get(struct device_node *np, int index,
+			    enum pruss_pru_id *pru_id);
+void pru_rproc_put(struct rproc *rproc);
+
+#else
+
+static inline struct rproc *
+pru_rproc_get(struct device_node *np, int index, enum pruss_pru_id *pru_id)
+{
+	return ERR_PTR(-EOPNOTSUPP);
+}
+
+static inline void pru_rproc_put(struct rproc *rproc) { }
+
+#endif /* CONFIG_PRU_REMOTEPROC */
+
+static inline bool is_pru_rproc(struct device *dev)
+{
+	const char *drv_name = dev_driver_string(dev);
+
+	if (strncmp(drv_name, PRU_RPROC_DRVNAME, sizeof(PRU_RPROC_DRVNAME)))
+		return false;
+
+	return true;
+}
+
+#endif /* __LINUX_PRUSS_H */
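
For reference, below is a minimal, hypothetical client-driver sketch showing the intended use of the new pru_rproc_get()/pru_rproc_put() API. The driver name, compatible string, probe/remove functions and the use of index 0 of the client's "ti,prus" phandle list are illustrative assumptions, not part of this patch:

/*
 * Hypothetical client-driver sketch (not part of this patch): acquire the
 * PRU core referenced by index 0 of this device's "ti,prus" property and
 * release it again on remove.
 */
#include <linux/err.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/pruss.h>
#include <linux/remoteproc.h>

static int pru_client_probe(struct platform_device *pdev)
{
	struct device_node *np = pdev->dev.of_node;
	enum pruss_pru_id pru_id;
	struct rproc *pru;

	/* Only one client may own a PRU core at a time */
	pru = pru_rproc_get(np, 0, &pru_id);
	if (IS_ERR(pru))
		return dev_err_probe(&pdev->dev, PTR_ERR(pru),
				     "failed to get PRU\n");

	dev_info(&pdev->dev, "acquired PRU%d\n", pru_id);

	/* Keep the handle so it can be released in .remove() */
	platform_set_drvdata(pdev, pru);

	return 0;
}

static int pru_client_remove(struct platform_device *pdev)
{
	struct rproc *pru = platform_get_drvdata(pdev);

	/* Make the core available to other clients again */
	pru_rproc_put(pru);

	return 0;
}

static const struct of_device_id pru_client_of_match[] = {
	{ .compatible = "ti,example-pru-client" },	/* illustrative only */
	{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, pru_client_of_match);

static struct platform_driver pru_client_driver = {
	.driver = {
		.name = "example-pru-client",
		.of_match_table = pru_client_of_match,
	},
	.probe = pru_client_probe,
	.remove = pru_client_remove,
};
module_platform_driver(pru_client_driver);

MODULE_LICENSE("GPL");

Since pru_rproc_get() returns -EPROBE_DEFER while the PRU rproc is not yet available and -EBUSY when another client already owns the core, routing the error through dev_err_probe() keeps the deferral path quiet and lets the client probe be retried later.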