From patchwork Tue Nov 22 11:11:40 2022
X-Patchwork-Submitter: Yanchao Yang (杨彦超)
X-Patchwork-Id: 24286
From: Yanchao Yang
To: Loic Poulain, Sergey Ryazanov, Johannes Berg, David S. Miller,
	Eric Dumazet, Jakub Kicinski, Paolo Abeni, netdev ML, kernel ML
CC: MTK ML, Liang Lu, Haijun Liu, Hua Yang, Ting Wang, Felix Chen,
	Mingliang Xu, Min Dong, Aiden Wang, Guohao Zhang, Chris Feng,
	Yanchao Yang, Lambert Wang, Mingchuang Qiao, Xiayu Zhang,
	Haozhe Chang, MediaTek Corporation
Subject: [PATCH net-next v1 01/13] net: wwan: tmi: Add PCIe core
Date: Tue, 22 Nov 2022 19:11:40 +0800
Message-ID: <20221122111152.160377-2-yanchao.yang@mediatek.com>
In-Reply-To: <20221122111152.160377-1-yanchao.yang@mediatek.com>
References: <20221122111152.160377-1-yanchao.yang@mediatek.com>
X-Mailer: git-send-email 2.18.0

From: MediaTek Corporation

Register the TMI device driver with the kernel and set up the fundamental
configuration for the device: the PCIe layer, the Modem Host Cross Core
Interface (MHCCIF), the Reset Generation Unit (RGU), the modem common
control operations and the build infrastructure.

* The PCIe layer implements driver probe and removal, MSI-X interrupt
  initialization and de-initialization, and the device reset flow.
* MHCCIF provides interrupt channels to communicate events such as the
  handshake, power management and port enumeration.
* RGU provides interrupt channels for notifications generated by the
  device, so that the TMI driver can detect a device reset.
* The modem common control operations provide basic read/write accessors
  for the device's hardware registers, mask/unmask/get/clear helpers for
  the device's interrupt registers and query functions for the device's
  status.

Signed-off-by: Ting Wang
Signed-off-by: MediaTek Corporation
---
 drivers/net/wwan/Kconfig                 |   11 +
 drivers/net/wwan/Makefile                |    1 +
 drivers/net/wwan/mediatek/Makefile       |   12 +
 drivers/net/wwan/mediatek/mtk_common.h   |   30 +
 drivers/net/wwan/mediatek/mtk_dev.c      |   50 +
 drivers/net/wwan/mediatek/mtk_dev.h      |  503 ++++++++++
 drivers/net/wwan/mediatek/pcie/mtk_pci.c | 1164 ++++++++++++++++++++++
 drivers/net/wwan/mediatek/pcie/mtk_pci.h |  150 +++
 drivers/net/wwan/mediatek/pcie/mtk_reg.h |   69 ++
 9 files changed, 1990 insertions(+)
 create mode 100644 drivers/net/wwan/mediatek/Makefile
 create mode 100644 drivers/net/wwan/mediatek/mtk_common.h
 create mode 100644 drivers/net/wwan/mediatek/mtk_dev.c
 create mode 100644 drivers/net/wwan/mediatek/mtk_dev.h
 create mode 100644 drivers/net/wwan/mediatek/pcie/mtk_pci.c
 create mode 100644 drivers/net/wwan/mediatek/pcie/mtk_pci.h
 create mode 100644 drivers/net/wwan/mediatek/pcie/mtk_reg.h

diff --git a/drivers/net/wwan/Kconfig b/drivers/net/wwan/Kconfig
index 3486ffe94ac4..a93a0c511d50 100644
--- a/drivers/net/wwan/Kconfig
+++ b/drivers/net/wwan/Kconfig
@@ -119,6 +119,17 @@ config MTK_T7XX
 	  If unsure, say N.
 
+config MTK_TMI
+	tristate "TMI Driver for MediaTek T-series Device"
+	depends on PCI
+	help
+	  This driver enables communication with MediaTek T-series WWAN devices.
+
+	  If you have one of these MediaTek T-series WWAN modules and wish to
+	  use it in Linux, say Y/M here.
+
+	  If unsure, say N.
+ endif # WWAN endmenu diff --git a/drivers/net/wwan/Makefile b/drivers/net/wwan/Makefile index 3960c0ae2445..198d8074851f 100644 --- a/drivers/net/wwan/Makefile +++ b/drivers/net/wwan/Makefile @@ -14,3 +14,4 @@ obj-$(CONFIG_QCOM_BAM_DMUX) += qcom_bam_dmux.o obj-$(CONFIG_RPMSG_WWAN_CTRL) += rpmsg_wwan_ctrl.o obj-$(CONFIG_IOSM) += iosm/ obj-$(CONFIG_MTK_T7XX) += t7xx/ +obj-$(CONFIG_MTK_TMI) += mediatek/ diff --git a/drivers/net/wwan/mediatek/Makefile b/drivers/net/wwan/mediatek/Makefile new file mode 100644 index 000000000000..ae5f8a5ba05a --- /dev/null +++ b/drivers/net/wwan/mediatek/Makefile @@ -0,0 +1,12 @@ +# SPDX-License-Identifier: BSD-3-Clause-Clear + +MODULE_NAME := mtk_tmi + +mtk_tmi-y = \ + pcie/mtk_pci.o \ + mtk_dev.o + +ccflags-y += -I$(srctree)/$(src)/ +ccflags-y += -I$(srctree)/$(src)/pcie/ + +obj-$(CONFIG_MTK_TMI) += mtk_tmi.o diff --git a/drivers/net/wwan/mediatek/mtk_common.h b/drivers/net/wwan/mediatek/mtk_common.h new file mode 100644 index 000000000000..516d3d9e02cf --- /dev/null +++ b/drivers/net/wwan/mediatek/mtk_common.h @@ -0,0 +1,30 @@ +/* SPDX-License-Identifier: BSD-3-Clause-Clear + * + * Copyright (c) 2022, MediaTek Inc. + */ + +#ifndef _MTK_COMMON_H +#define _MTK_COMMON_H + +#include + +#define MTK_UEVENT_INFO_LEN 128 + +/* MTK uevent */ +enum mtk_uevent_id { + MTK_UEVENT_FSM = 1, + MTK_UEVENT_MAX +}; + +static inline void mtk_uevent_notify(struct device *dev, enum mtk_uevent_id id, const char *info) +{ + char buf[MTK_UEVENT_INFO_LEN]; + char *ext[2] = {NULL, NULL}; + + snprintf(buf, MTK_UEVENT_INFO_LEN, "%s:event_id=%d, info=%s", + dev->kobj.name, id, info); + ext[0] = buf; + kobject_uevent_env(&dev->kobj, KOBJ_CHANGE, ext); +} + +#endif /* _MTK_COMMON_H */ diff --git a/drivers/net/wwan/mediatek/mtk_dev.c b/drivers/net/wwan/mediatek/mtk_dev.c new file mode 100644 index 000000000000..d3d7bf940d78 --- /dev/null +++ b/drivers/net/wwan/mediatek/mtk_dev.c @@ -0,0 +1,50 @@ +// SPDX-License-Identifier: BSD-3-Clause-Clear +/* + * Copyright (c) 2022, MediaTek Inc. + */ + +#include "mtk_dev.h" + +int mtk_dev_init(struct mtk_md_dev *mdev) +{ + return 0; +} + +void mtk_dev_exit(struct mtk_md_dev *mdev) +{ +} + +int mtk_dev_start(struct mtk_md_dev *mdev) +{ + return 0; +} + +int mtk_dma_map_single(struct mtk_md_dev *mdev, dma_addr_t *addr, + void *mem, size_t size, int direction) +{ + if (!addr) + return -EINVAL; + + *addr = dma_map_single(mdev->dev, mem, size, direction); + if (unlikely(dma_mapping_error(mdev->dev, *addr))) { + dev_err(mdev->dev, "Failed to map dma!\n"); + return -ENOMEM; + } + + return 0; +} + +int mtk_dma_map_page(struct mtk_md_dev *mdev, dma_addr_t *addr, + struct page *page, unsigned long offset, size_t size, int direction) +{ + if (!addr) + return -EINVAL; + + *addr = dma_map_page(mdev->dev, page, offset, size, direction); + if (unlikely(dma_mapping_error(mdev->dev, *addr))) { + dev_err(mdev->dev, "Failed to map dma!\n"); + return -ENOMEM; + } + + return 0; +} diff --git a/drivers/net/wwan/mediatek/mtk_dev.h b/drivers/net/wwan/mediatek/mtk_dev.h new file mode 100644 index 000000000000..bd7b1dc11daf --- /dev/null +++ b/drivers/net/wwan/mediatek/mtk_dev.h @@ -0,0 +1,503 @@ +/* SPDX-License-Identifier: BSD-3-Clause-Clear + * + * Copyright (c) 2022, MediaTek Inc. 
+ */ + +#ifndef __MTK_DEV_H__ +#define __MTK_DEV_H__ + +#include +#include + +#define MTK_DEV_STR_LEN 16 + +enum mtk_irq_src { + MTK_IRQ_SRC_MIN, + MTK_IRQ_SRC_MHCCIF, + MTK_IRQ_SRC_SAP_RGU, + MTK_IRQ_SRC_DPMAIF, + MTK_IRQ_SRC_DPMAIF2, + MTK_IRQ_SRC_CLDMA0, + MTK_IRQ_SRC_CLDMA1, + MTK_IRQ_SRC_CLDMA2, + MTK_IRQ_SRC_CLDMA3, + MTK_IRQ_SRC_PM_LOCK, + MTK_IRQ_SRC_DPMAIF3, + MTK_IRQ_SRC_MAX +}; + +enum mtk_user_id { + MTK_USER_HW, + MTK_USER_CTRL, + MTK_USER_DPMAIF, + MTK_USER_PM, + MTK_USER_EXCEPT, + MTK_USER_MAX +}; + +enum mtk_reset_type { + RESET_FLDR, + RESET_PLDR, + RESET_RGU, +}; + +enum mtk_reinit_type { + REINIT_TYPE_RESUME, + REINIT_TYPE_EXP, +}; + +enum mtk_l1ss_grp { + L1SS_PM, + L1SS_EXT_EVT, +}; + +#define L1SS_BIT_L1(grp) BIT(((grp) << 2) + 1) +#define L1SS_BIT_L1_1(grp) BIT(((grp) << 2) + 2) +#define L1SS_BIT_L1_2(grp) BIT(((grp) << 2) + 3) + +struct mtk_md_dev; + +/* struct mtk_hw_ops - The HW layer operations provided to transaction layer. + * @read32: Callback to read 32-bit register. + * @write32: Callback to write 32-bit register. + * @get_dev_state: Callback to get the device's state. + * @ack_dev_state: Callback to acknowledge device state. + * @get_ds_status: Callback to get device deep sleep status. + * @ds_lock: Callback to lock the deep sleep of device. + * @ds_unlock: Callback to unlock the deep sleep of device. + * @set_l1ss: Callback to set the link L1 and L1ss enable/disable. + * @get_resume_state:Callback to get PM resume information that device writes. + * @get_irq_id: Callback to get the irq id specific IP on a chip. + * @get_virq_id: Callback to get the system virtual IRQ. + * @register_irq: Callback to register callback function to specific hardware IP. + * @unregister_irq: Callback to unregister callback function to specific hardware IP. + * @mask_irq: Callback to mask the interrupt of specific hardware IP. + * @unmask_irq: Callback to unmask the interrupt of specific hardware IP. + * @clear_irq: Callback to clear the interrupt of specific hardware IP. + * @register_ext_evt:Callback to register HW Layer external event. + * @unregister_ext_evt:Callback to unregister HW Layer external event. + * @mask_ext_evt: Callback to mask HW Layer external event. + * @unmask_ext_evt: Callback to unmask HW Layer external event. + * @clear_ext_evt: Callback to clear HW Layer external event status. + * @send_ext_evt: Callback to send HW Layer external event. + * @get_ext_evt_status:Callback to get HW Layer external event status. + * @reset: Callback to reset device. + * @reinit: Callback to execute device re-initialization. + * @get_hp_status: Callback to get link hotplug status. + */ +struct mtk_hw_ops { + /* Read value from MD. For PCIe, it's BAR 2/3 MMIO read */ + u32 (*read32)(struct mtk_md_dev *mdev, u64 addr); + /* Write value to MD. 
For PCIe, it's BAR 2/3 MMIO write */ + void (*write32)(struct mtk_md_dev *mdev, u64 addr, u32 val); + /* Device operations */ + u32 (*get_dev_state)(struct mtk_md_dev *mdev); + void (*ack_dev_state)(struct mtk_md_dev *mdev, u32 state); + u32 (*get_ds_status)(struct mtk_md_dev *mdev); + void (*ds_lock)(struct mtk_md_dev *mdev); + void (*ds_unlock)(struct mtk_md_dev *mdev); + void (*set_l1ss)(struct mtk_md_dev *mdev, u32 type, bool enable); + u32 (*get_resume_state)(struct mtk_md_dev *mdev); + /* IRQ Related operations */ + int (*get_irq_id)(struct mtk_md_dev *mdev, enum mtk_irq_src irq_src); + int (*get_virq_id)(struct mtk_md_dev *mdev, int irq_id); + int (*register_irq)(struct mtk_md_dev *mdev, int irq_id, + int (*irq_cb)(int irq_id, void *data), void *data); + int (*unregister_irq)(struct mtk_md_dev *mdev, int irq_id); + int (*mask_irq)(struct mtk_md_dev *mdev, int irq_id); + int (*unmask_irq)(struct mtk_md_dev *mdev, int irq_id); + int (*clear_irq)(struct mtk_md_dev *mdev, int irq_id); + /* External event related */ + int (*register_ext_evt)(struct mtk_md_dev *mdev, u32 chs, + int (*evt_cb)(u32 status, void *data), void *data); + int (*unregister_ext_evt)(struct mtk_md_dev *mdev, u32 chs); + void (*mask_ext_evt)(struct mtk_md_dev *mdev, u32 chs); + void (*unmask_ext_evt)(struct mtk_md_dev *mdev, u32 chs); + void (*clear_ext_evt)(struct mtk_md_dev *mdev, u32 chs); + int (*send_ext_evt)(struct mtk_md_dev *mdev, u32 ch); + u32 (*get_ext_evt_status)(struct mtk_md_dev *mdev); + + int (*reset)(struct mtk_md_dev *mdev, enum mtk_reset_type type); + int (*reinit)(struct mtk_md_dev *mdev, enum mtk_reinit_type type); + int (*get_hp_status)(struct mtk_md_dev *mdev); +}; + +/* mtk_md_dev defines the structure of MTK modem device */ +struct mtk_md_dev { + struct device *dev; + const struct mtk_hw_ops *hw_ops; /* The operations provided by hw layer */ + void *hw_priv; + u32 hw_ver; + int msi_nvecs; + char dev_str[MTK_DEV_STR_LEN]; +}; + +int mtk_dev_init(struct mtk_md_dev *mdev); +void mtk_dev_exit(struct mtk_md_dev *mdev); +int mtk_dev_start(struct mtk_md_dev *mdev); + +/* mtk_hw_read32() -Read dword from register. + * + * @mdev: Device instance. + * @addr: Register address. + * + * Return: Dword register value. + */ +static inline u32 mtk_hw_read32(struct mtk_md_dev *mdev, u64 addr) +{ + return mdev->hw_ops->read32(mdev, addr); +} + +/* mtk_hw_write32() -Write dword to register. + * + * @mdev: Device instance. + * @addr: Register address. + * @val: Dword to be written. + */ +static inline void mtk_hw_write32(struct mtk_md_dev *mdev, u64 addr, u32 val) +{ + mdev->hw_ops->write32(mdev, addr, val); +} + +/* mtk_hw_get_dev_state() -Get device's state register. + * + * @mdev: Device instance. + * + * Return: The value of state register. + */ +static inline u32 mtk_hw_get_dev_state(struct mtk_md_dev *mdev) +{ + return mdev->hw_ops->get_dev_state(mdev); +} + +/* mtk_hw_ack_dev_state() -Write state to device's state register. + * + * @mdev: Device instance. + * @state: The state value to be written. + */ +static inline void mtk_hw_ack_dev_state(struct mtk_md_dev *mdev, u32 state) +{ + mdev->hw_ops->ack_dev_state(mdev, state); +} + +/* mtk_hw_get_ds_status() -Get device's deep sleep status. + * + * @mdev: Device instance. + * + * Return: The value of deep sleep register. + */ +static inline u32 mtk_hw_get_ds_status(struct mtk_md_dev *mdev) +{ + return mdev->hw_ops->get_ds_status(mdev); +} + +/* mtk_hw_ds_lock() -Prevent the device from entering deep sleep. + * + * @mdev: Device instance. 
+ */ +static inline void mtk_hw_ds_lock(struct mtk_md_dev *mdev) +{ + mdev->hw_ops->ds_lock(mdev); +} + +/* mtk_hw_ds_unlock() -Allow the device from entering deep sleep. + * + * @mdev: Device instance. + */ +static inline void mtk_hw_ds_unlock(struct mtk_md_dev *mdev) +{ + mdev->hw_ops->ds_unlock(mdev); +} + +/* mtk_hw_set_l1ss() -Enable or disable l1ss. + * + * @mdev: Device instance. + * @type: Select the sub-function of l1ss by bit, + * please see "enum mtk_l1ss_grp" and "L1SS_BIT_L1", "L1SS_BIT_L1_1", "L1SS_BIT_L1_2". + * @enable: Input true or false. + */ +static inline void mtk_hw_set_l1ss(struct mtk_md_dev *mdev, u32 type, bool enable) +{ + mdev->hw_ops->set_l1ss(mdev, type, enable); +} + +/* mtk_hw_get_resume_state() -Get device resume status. + * + * @mdev: Device instance. + * + * Return: The resume state of device. + */ +static inline u32 mtk_hw_get_resume_state(struct mtk_md_dev *mdev) +{ + return mdev->hw_ops->get_resume_state(mdev); +} + +/* mtk_hw_get_irq_id() -Get hardware irq_id by virtual irq_src. + * + * @mdev: Device instance. + * @irq_src: Virtual irq source number. + * + * Return: a negative value indicates failure, other values are valid irq_id. + */ +static inline int mtk_hw_get_irq_id(struct mtk_md_dev *mdev, enum mtk_irq_src irq_src) +{ + return mdev->hw_ops->get_irq_id(mdev, irq_src); +} + +/* mtk_hw_get_virq_id() -Get system virtual IRQ by hardware irq_id. + * + * @mdev: Device instance. + * @irq_src: Hardware irq_id. + * + * Return: System virtual IRQ. + */ +static inline int mtk_hw_get_virq_id(struct mtk_md_dev *mdev, int irq_id) +{ + return mdev->hw_ops->get_virq_id(mdev, irq_id); +} + +/* mtk_hw_register_irq() -Register the interrupt callback to irq_id. + * + * @mdev: Device instance. + * @irq_id: Hardware irq id. + * @irq_cb: The interrupt callback. + * @data: Private data for callback. + * + * Return: 0 indicates success, other value indicates failure. + */ +static inline int mtk_hw_register_irq(struct mtk_md_dev *mdev, int irq_id, + int (*irq_cb)(int irq_id, void *data), void *data) +{ + return mdev->hw_ops->register_irq(mdev, irq_id, irq_cb, data); +} + +/* mtk_hw_unregister_irq() -Unregister the interrupt callback to irq_id. + * + * @mdev: Device instance. + * @irq_id: Hardware irq id. + * + * Return: 0 indicates success, other value indicates failure. + */ +static inline int mtk_hw_unregister_irq(struct mtk_md_dev *mdev, int irq_id) +{ + return mdev->hw_ops->unregister_irq(mdev, irq_id); +} + +/* mtk_hw_mask_irq() -Mask interrupt. + * + * @mdev: Device instance. + * @irq_id: Hardware irq id. + * + * Return: 0 indicates success, other value indicates failure. + */ +static inline int mtk_hw_mask_irq(struct mtk_md_dev *mdev, int irq_id) +{ + return mdev->hw_ops->mask_irq(mdev, irq_id); +} + +/* mtk_hw_unmask_irq() -Unmask interrupt. + * + * @mdev: Device instance. + * @irq_id: Hardware irq id. + * + * Return: 0 indicates success, other value indicates failure. + */ +static inline int mtk_hw_unmask_irq(struct mtk_md_dev *mdev, int irq_id) +{ + return mdev->hw_ops->unmask_irq(mdev, irq_id); +} + +/* mtk_hw_clear_irq() -Clear interrupt. + * + * @mdev: Device instance. + * @irq_id: Hardware irq id. + * + * Return: 0 indicates success, other value indicates failure. + */ +static inline int mtk_hw_clear_irq(struct mtk_md_dev *mdev, int irq_id) +{ + return mdev->hw_ops->clear_irq(mdev, irq_id); +} + +/* mtk_hw_register_ext_evt() -Register callback to external events. + * + * @mdev: Device instance. + * @chs: External event channels. 
+ * @evt_cb: External events callback. + * @data: Private data for callback. + * + * Return: 0 indicates success, other value indicates failure. + */ +static inline int mtk_hw_register_ext_evt(struct mtk_md_dev *mdev, u32 chs, + int (*evt_cb)(u32 status, void *data), void *data) +{ + return mdev->hw_ops->register_ext_evt(mdev, chs, evt_cb, data); +} + +/* mtk_hw_unregister_ext_evt() -Unregister callback to external events. + * + * @mdev: Device instance. + * @chs: External event channels. + * + * Return: 0 indicates success, other value indicates failure. + */ +static inline int mtk_hw_unregister_ext_evt(struct mtk_md_dev *mdev, u32 chs) +{ + return mdev->hw_ops->unregister_ext_evt(mdev, chs); +} + +/* mtk_hw_mask_ext_evt() -Mask external events. + * + * @mdev: Device instance. + * @chs: External event channels. + */ +static inline void mtk_hw_mask_ext_evt(struct mtk_md_dev *mdev, u32 chs) +{ + mdev->hw_ops->mask_ext_evt(mdev, chs); +} + +/* mtk_hw_unmask_ext_evt() -Unmask external events. + * + * @mdev: Device instance. + * @chs: External event channels. + */ +static inline void mtk_hw_unmask_ext_evt(struct mtk_md_dev *mdev, u32 chs) +{ + mdev->hw_ops->unmask_ext_evt(mdev, chs); +} + +/* mtk_hw_clear_ext_evt() -Clear external events. + * + * @mdev: Device instance. + * @chs: External event channels. + */ +static inline void mtk_hw_clear_ext_evt(struct mtk_md_dev *mdev, u32 chs) +{ + mdev->hw_ops->clear_ext_evt(mdev, chs); +} + +/* mtk_hw_send_ext_evt() -Send external event to device. + * + * @mdev: Device instance. + * @ch: External event channel, only allow one channel at a time. + * + * Return: 0 indicates success, other value indicates failure. + */ +static inline int mtk_hw_send_ext_evt(struct mtk_md_dev *mdev, u32 ch) +{ + return mdev->hw_ops->send_ext_evt(mdev, ch); +} + +/* mtk_hw_get_ext_evt_status() -Get external event status of device. + * + * @mdev: Device instance. + * + * Return: External event status of device. + */ +static inline u32 mtk_hw_get_ext_evt_status(struct mtk_md_dev *mdev) +{ + return mdev->hw_ops->get_ext_evt_status(mdev); +} + +/* mtk_hw_reset() -Reset device. + * + * @mdev: Device instance. + * @type: Reset type. + * + * Return: 0 indicates success, other value indicates failure. + */ +static inline int mtk_hw_reset(struct mtk_md_dev *mdev, enum mtk_reset_type type) +{ + return mdev->hw_ops->reset(mdev, type); +} + +/* mtk_hw_reinit() -Reinitialize device. + * + * @mdev: Device instance. + * @type: Reinit type. + * + * Return: 0 indicates success, other value indicates failure. + */ +static inline int mtk_hw_reinit(struct mtk_md_dev *mdev, enum mtk_reinit_type type) +{ + return mdev->hw_ops->reinit(mdev, type); +} + +/* mtk_hw_get_hp_status() -Get whether the device can be hot-plugged. + * + * @mdev: Device instance. + * + * Return: 0 indicates can't, other value indicates can. 
+ */ +static inline int mtk_hw_get_hp_status(struct mtk_md_dev *mdev) +{ + return mdev->hw_ops->get_hp_status(mdev); +} + +static inline void *mtk_dma_alloc_coherent(struct mtk_md_dev *mdev, + size_t size, dma_addr_t *addr, gfp_t flag) +{ + if (addr) + return dma_alloc_coherent(mdev->dev, size, addr, flag); + return NULL; +} + +static inline int mtk_dma_free_coherent(struct mtk_md_dev *mdev, + size_t size, void *cpu_addr, dma_addr_t addr) +{ + if (!addr) + return -EINVAL; + dma_free_coherent(mdev->dev, size, cpu_addr, addr); + return 0; +} + +static inline struct dma_pool *mtk_dma_pool_create(struct mtk_md_dev *mdev, + const char *name, size_t size, + size_t align, size_t allocation) +{ + return dma_pool_create(name, mdev->dev, size, align, allocation); +} + +static inline void mtk_dma_pool_destroy(struct dma_pool *pool) +{ + dma_pool_destroy(pool); +} + +static inline void *mtk_dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags, dma_addr_t *addr) +{ + if (!pool || !addr) + return NULL; + return dma_pool_zalloc(pool, mem_flags, addr); +} + +static inline int mtk_dma_pool_free(struct dma_pool *pool, void *cpu_addr, dma_addr_t addr) +{ + if (!pool || !addr) + return -EINVAL; + dma_pool_free(pool, cpu_addr, addr); + return 0; +} + +int mtk_dma_map_single(struct mtk_md_dev *mdev, dma_addr_t *addr, + void *mem, size_t size, int direction); +static inline int mtk_dma_unmap_single(struct mtk_md_dev *mdev, + dma_addr_t addr, size_t size, int direction) +{ + if (!addr) + return -EINVAL; + dma_unmap_single(mdev->dev, addr, size, direction); + return 0; +} + +int mtk_dma_map_page(struct mtk_md_dev *mdev, dma_addr_t *addr, + struct page *page, unsigned long offset, size_t size, int direction); +static inline int mtk_dma_unmap_page(struct mtk_md_dev *mdev, + dma_addr_t addr, size_t size, int direction) +{ + if (!addr) + return -EINVAL; + dma_unmap_page(mdev->dev, addr, size, direction); + return 0; +} + +#endif /* __MTK_DEV_H__ */ diff --git a/drivers/net/wwan/mediatek/pcie/mtk_pci.c b/drivers/net/wwan/mediatek/pcie/mtk_pci.c new file mode 100644 index 000000000000..5be61178d30d --- /dev/null +++ b/drivers/net/wwan/mediatek/pcie/mtk_pci.c @@ -0,0 +1,1164 @@ +// SPDX-License-Identifier: BSD-3-Clause-Clear +/* + * Copyright (c) 2022, MediaTek Inc. + */ + +#include +#include +#include +#include +#include +#include +#include + +#include "mtk_pci.h" +#include "mtk_reg.h" + +#define MTK_PCI_TRANSPARENT_ATR_SIZE (0x3F) + +/* This table records which bits of the interrupt status register each interrupt corresponds to + * when there are different numbers of msix interrupts. 
+ */ +static const u32 mtk_msix_bits_map[MTK_IRQ_CNT_MAX / 2][5] = { + {0xFFFFFFFF, 0x55555555, 0x11111111, 0x01010101, 0x00010001}, + {0x00000000, 0xAAAAAAAA, 0x22222222, 0x02020202, 0x00020002}, + {0x00000000, 0x00000000, 0x44444444, 0x04040404, 0x00040004}, + {0x00000000, 0x00000000, 0x88888888, 0x08080808, 0x00080008}, + {0x00000000, 0x00000000, 0x00000000, 0x10101010, 0x00100010}, + {0x00000000, 0x00000000, 0x00000000, 0x20202020, 0x00200020}, + {0x00000000, 0x00000000, 0x00000000, 0x40404040, 0x00400040}, + {0x00000000, 0x00000000, 0x00000000, 0x80808080, 0x00800080}, + {0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x01000100}, + {0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x02000200}, + {0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x04000400}, + {0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x08000800}, + {0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x10001000}, + {0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x20002000}, + {0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x40004000}, + {0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x80008000}, +}; + +static u32 mtk_pci_mac_read32(struct mtk_pci_priv *priv, u64 addr) +{ + return ioread32(priv->mac_reg_base + addr); +} + +static void mtk_pci_mac_write32(struct mtk_pci_priv *priv, u64 addr, u32 val) +{ + iowrite32(val, priv->mac_reg_base + addr); +} + +static void mtk_pci_set_msix_merged(struct mtk_pci_priv *priv, int irq_cnt) +{ + mtk_pci_mac_write32(priv, REG_PCIE_CFG_MSIX, ffs(irq_cnt) * 2 - 1); +} + +static int mtk_pci_setup_atr(struct mtk_md_dev *mdev, struct mtk_atr_cfg *cfg) +{ + struct mtk_pci_priv *priv = mdev->hw_priv; + u32 addr, val, size_h, size_l; + int atr_size, pos, offset; + + if (cfg->transparent) { + atr_size = MTK_PCI_TRANSPARENT_ATR_SIZE; /* No address conversion is performed */ + } else { + if (cfg->src_addr & (cfg->size - 1)) { + dev_err(mdev->dev, "Invalid atr src addr is not aligned to size\n"); + return -EFAULT; + } + if (cfg->trsl_addr & (cfg->size - 1)) { + dev_err(mdev->dev, "Invalid atr trsl addr is not aligned to size, %llx, %llx\n", + cfg->trsl_addr, cfg->size - 1); + return -EFAULT; + } + + size_l = cfg->size & 0xFFFFFFFF; + size_h = cfg->size >> 32; + pos = ffs(size_l); + if (pos) { + /* Address Translate Space Size is equal to 2^(atr_size+1) + * "-2" means "-1-1", the first "-1" is because of the atr_size register, + * the second is because of the ffs() will increase by one. + */ + atr_size = pos - 2; + } else { + pos = ffs(size_h); + /* "+30" means "+32-1-1", the meaning of "-1-1" is same as above, + * "+32" is because atr_size is large, exceeding 32-bits. 
+ */ + atr_size = pos + 30; + } + } + + /* Calculate table offset */ + offset = ATR_PORT_OFFSET * cfg->port + ATR_TABLE_OFFSET * cfg->table; + /* SRC_ADDR_H */ + addr = REG_ATR_PCIE_WIN0_T0_SRC_ADDR_MSB + offset; + val = (u32)(cfg->src_addr >> 32); + mtk_pci_mac_write32(priv, addr, val); + /* SRC_ADDR_L */ + addr = REG_ATR_PCIE_WIN0_T0_SRC_ADDR_LSB + offset; + val = (u32)(cfg->src_addr & 0xFFFFF000) | (atr_size << 1) | 0x1; + mtk_pci_mac_write32(priv, addr, val); + + /* TRSL_ADDR_H */ + addr = REG_ATR_PCIE_WIN0_T0_TRSL_ADDR_MSB + offset; + val = (u32)(cfg->trsl_addr >> 32); + mtk_pci_mac_write32(priv, addr, val); + /* TRSL_ADDR_L */ + addr = REG_ATR_PCIE_WIN0_T0_TRSL_ADDR_LSB + offset; + val = (u32)(cfg->trsl_addr & 0xFFFFF000); + mtk_pci_mac_write32(priv, addr, val); + + /* TRSL_PARAM */ + addr = REG_ATR_PCIE_WIN0_T0_TRSL_PARAM + offset; + val = (cfg->trsl_param << 16) | cfg->trsl_id; + mtk_pci_mac_write32(priv, addr, val); + + return 0; +} + +static void mtk_pci_atr_disable(struct mtk_pci_priv *priv) +{ + int port, tbl, offset; + + /* Disable all ATR table for all ports */ + for (port = ATR_SRC_PCI_WIN0; port <= ATR_SRC_AXIS_3; port++) + for (tbl = 0; tbl < ATR_TABLE_NUM_PER_ATR; tbl++) { + /* Calculate table offset */ + offset = ATR_PORT_OFFSET * port + ATR_TABLE_OFFSET * tbl; + /* Disable table by SRC_ADDR_L */ + mtk_pci_mac_write32(priv, REG_ATR_PCIE_WIN0_T0_SRC_ADDR_LSB + offset, 0); + } +} + +static int mtk_pci_atr_init(struct mtk_md_dev *mdev) +{ + struct pci_dev *pdev = to_pci_dev(mdev->dev); + struct mtk_pci_priv *priv = mdev->hw_priv; + struct mtk_atr_cfg cfg; + int port, ret; + + mtk_pci_atr_disable(priv); + + /* Config ATR for RC to access device's register */ + cfg.src_addr = pci_resource_start(pdev, MTK_BAR_2_3_IDX); + cfg.size = ATR_PCIE_REG_SIZE; + cfg.trsl_addr = ATR_PCIE_REG_TRSL_ADDR; + cfg.type = ATR_PCI2AXI; + cfg.port = ATR_PCIE_REG_PORT; + cfg.table = ATR_PCIE_REG_TABLE_NUM; + cfg.trsl_id = ATR_PCIE_REG_TRSL_PORT; + cfg.trsl_param = 0x0; + cfg.transparent = 0x0; + ret = mtk_pci_setup_atr(mdev, &cfg); + if (ret) + return ret; + + /* Config ATR for MHCCIF */ + cfg.src_addr = pci_resource_start(pdev, MTK_BAR_2_3_IDX); + cfg.src_addr += priv->cfg->mhccif_rc_base_addr - ATR_PCIE_REG_TRSL_ADDR; + cfg.size = priv->cfg->mhccif_trsl_size; + cfg.trsl_addr = priv->cfg->mhccif_rc_reg_trsl_addr; + cfg.type = ATR_PCI2AXI; + cfg.port = ATR_PCIE_REG_PORT; + cfg.table = ART_PCIE_REG_MHCCIF_TABLE_NUM; + cfg.trsl_id = ATR_PCIE_REG_TRSL_PORT; + cfg.trsl_param = 0x0; + cfg.transparent = 0x0; + ret = mtk_pci_setup_atr(mdev, &cfg); + if (ret) + return ret; + + /* Config ATR for EP to access RC's memory */ + for (port = ATR_PCIE_DEV_DMA_PORT_START; port <= ATR_PCIE_DEV_DMA_PORT_END; port++) { + cfg.src_addr = ATR_PCIE_DEV_DMA_SRC_ADDR; + cfg.size = ATR_PCIE_DEV_DMA_SIZE; + cfg.trsl_addr = ATR_PCIE_DEV_DMA_TRSL_ADDR; + cfg.type = ATR_AXI2PCI; + cfg.port = port; + cfg.table = ATR_PCIE_DEV_DMA_TABLE_NUM; + cfg.trsl_id = ATR_DST_PCI_TRX; + cfg.trsl_param = 0x0; + /* Enable transparent translation */ + cfg.transparent = ATR_PCIE_DEV_DMA_TRANSPARENT; + ret = mtk_pci_setup_atr(mdev, &cfg); + if (ret) + return ret; + } + + return 0; +} + +static u32 mtk_pci_read32(struct mtk_md_dev *mdev, u64 addr) +{ + struct mtk_pci_priv *priv = mdev->hw_priv; + + return ioread32(priv->ext_reg_base + addr); +} + +static void mtk_pci_write32(struct mtk_md_dev *mdev, u64 addr, u32 val) +{ + struct mtk_pci_priv *priv = mdev->hw_priv; + + iowrite32(val, priv->ext_reg_base + addr); +} + +static u32 
mtk_pci_get_dev_state(struct mtk_md_dev *mdev) +{ + return mtk_pci_mac_read32(mdev->hw_priv, REG_PCIE_DEBUG_DUMMY_7); +} + +static void mtk_pci_ack_dev_state(struct mtk_md_dev *mdev, u32 state) +{ + mtk_pci_mac_write32(mdev->hw_priv, REG_PCIE_DEBUG_DUMMY_7, state); +} + +static void mtk_pci_force_mac_active(struct mtk_md_dev *mdev, bool enable) +{ + struct mtk_pci_priv *priv = mdev->hw_priv; + u32 reg; + + reg = mtk_pci_mac_read32(priv, REG_PCIE_MISC_CTRL); + if (enable) + reg |= MTK_FORCE_MAC_ACTIVE_BIT; + else + reg &= ~MTK_FORCE_MAC_ACTIVE_BIT; + mtk_pci_mac_write32(priv, REG_PCIE_MISC_CTRL, reg); +} + +static u32 mtk_pci_get_ds_status(struct mtk_md_dev *mdev) +{ + u32 reg; + + mtk_pci_force_mac_active(mdev, true); + reg = mtk_pci_mac_read32(mdev->hw_priv, REG_PCIE_RESOURCE_STATUS); + mtk_pci_force_mac_active(mdev, false); + + return reg; +} + +static void mtk_pci_ds_lock(struct mtk_md_dev *mdev) +{ + mtk_pci_mac_write32(mdev->hw_priv, REG_PCIE_PEXTP_MAC_SLEEP_CTRL, + MTK_DISABLE_DS_BIT(0)); +} + +static void mtk_pci_ds_unlock(struct mtk_md_dev *mdev) +{ + mtk_pci_mac_write32(mdev->hw_priv, REG_PCIE_PEXTP_MAC_SLEEP_CTRL, + MTK_ENABLE_DS_BIT(0)); +} + +static void mtk_pci_set_l1ss(struct mtk_md_dev *mdev, u32 type, bool enable) +{ + struct mtk_pci_priv *priv = mdev->hw_priv; + u32 addr = REG_DIS_ASPM_LOWPWR_SET_0; + + if (enable) + addr = REG_DIS_ASPM_LOWPWR_CLR_0; + + mtk_pci_mac_write32(priv, addr, type); +} + +static int mtk_pci_get_irq_id(struct mtk_md_dev *mdev, enum mtk_irq_src irq_src) +{ + struct mtk_pci_priv *priv = mdev->hw_priv; + const int *irq_tbl = priv->cfg->irq_tbl; + int irq_id = -EINVAL; + + if (irq_src > MTK_IRQ_SRC_MIN && irq_src < MTK_IRQ_SRC_MAX) { + irq_id = irq_tbl[irq_src]; + if (unlikely(irq_id < 0 || irq_id >= MTK_IRQ_CNT_MAX)) + irq_id = -EINVAL; + } + + return irq_id; +} + +static int mtk_pci_get_virq_id(struct mtk_md_dev *mdev, int irq_id) +{ + struct pci_dev *pdev = to_pci_dev(mdev->dev); + int nr = 0; + + if (pdev->msix_enabled) + nr = irq_id % mdev->msi_nvecs; + + return pci_irq_vector(pdev, nr); +} + +static int mtk_pci_register_irq(struct mtk_md_dev *mdev, int irq_id, + int (*irq_cb)(int irq_id, void *data), void *data) +{ + struct mtk_pci_priv *priv = mdev->hw_priv; + + if (unlikely((irq_id < 0 || irq_id >= MTK_IRQ_CNT_MAX) || !irq_cb)) + return -EINVAL; + + if (priv->irq_cb_list[irq_id]) { + dev_err(mdev->dev, + "Unable to register irq, irq_id=%d, it's already been register by %ps.\n", + irq_id, priv->irq_cb_list[irq_id]); + return -EFAULT; + } + priv->irq_cb_list[irq_id] = irq_cb; + priv->irq_cb_data[irq_id] = data; + + return 0; +} + +static int mtk_pci_unregister_irq(struct mtk_md_dev *mdev, int irq_id) +{ + struct mtk_pci_priv *priv = mdev->hw_priv; + + if (unlikely(irq_id < 0 || irq_id >= MTK_IRQ_CNT_MAX)) + return -EINVAL; + + if (!priv->irq_cb_list[irq_id]) { + dev_err(mdev->dev, "irq_id=%d has not been registered\n", irq_id); + return -EFAULT; + } + priv->irq_cb_list[irq_id] = NULL; + priv->irq_cb_data[irq_id] = NULL; + + return 0; +} + +static int mtk_pci_mask_irq(struct mtk_md_dev *mdev, int irq_id) +{ + struct mtk_pci_priv *priv = mdev->hw_priv; + + if (unlikely((irq_id < 0 || irq_id >= MTK_IRQ_CNT_MAX) || priv->irq_type != PCI_IRQ_MSIX)) { + dev_err(mdev->dev, "Failed to mask irq: input irq_id=%d\n", irq_id); + return -EINVAL; + } + + mtk_pci_mac_write32(priv, REG_IMASK_HOST_MSIX_CLR_GRP0_0, BIT(irq_id)); + + return 0; +} + +static int mtk_pci_unmask_irq(struct mtk_md_dev *mdev, int irq_id) +{ + struct mtk_pci_priv *priv = mdev->hw_priv; + 
+ if (unlikely((irq_id < 0 || irq_id >= MTK_IRQ_CNT_MAX) || priv->irq_type != PCI_IRQ_MSIX)) { + dev_err(mdev->dev, "Failed to unmask irq: input irq_id=%d\n", irq_id); + return -EINVAL; + } + + mtk_pci_mac_write32(priv, REG_IMASK_HOST_MSIX_SET_GRP0_0, BIT(irq_id)); + + return 0; +} + +static int mtk_pci_clear_irq(struct mtk_md_dev *mdev, int irq_id) +{ + struct mtk_pci_priv *priv = mdev->hw_priv; + + if (unlikely((irq_id < 0 || irq_id >= MTK_IRQ_CNT_MAX) || priv->irq_type != PCI_IRQ_MSIX)) { + dev_err(mdev->dev, "Failed to clear irq: input irq_id=%d\n", irq_id); + return -EINVAL; + } + + mtk_pci_mac_write32(priv, REG_MSIX_ISTATUS_HOST_GRP0_0, BIT(irq_id)); + + return 0; +} + +static int mtk_mhccif_register_evt(struct mtk_md_dev *mdev, u32 chs, + int (*evt_cb)(u32 status, void *data), void *data) +{ + struct mtk_pci_priv *priv = mdev->hw_priv; + struct mtk_mhccif_cb *cb; + unsigned long flag; + int ret = 0; + + if (!chs || !evt_cb) + return -EINVAL; + + spin_lock_irqsave(&priv->mhccif_lock, flag); + list_for_each_entry(cb, &priv->mhccif_cb_list, entry) { + if (cb->chs & chs) { + ret = -EFAULT; + dev_err(mdev->dev, + "Unable to register evt, chs=0x%08X&0x%08X registered_cb=%ps\n", + chs, cb->chs, cb->evt_cb); + goto err; + } + } + cb = devm_kzalloc(mdev->dev, sizeof(*cb), GFP_ATOMIC); + if (!cb) { + ret = -ENOMEM; + goto err; + } + cb->evt_cb = evt_cb; + cb->data = data; + cb->chs = chs; + list_add_tail(&cb->entry, &priv->mhccif_cb_list); + +err: + spin_unlock_irqrestore(&priv->mhccif_lock, flag); + + return ret; +} + +static int mtk_mhccif_unregister_evt(struct mtk_md_dev *mdev, u32 chs) +{ + struct mtk_pci_priv *priv = mdev->hw_priv; + struct mtk_mhccif_cb *cb, *next; + unsigned long flag; + int ret = 0; + + if (!chs) + return -EINVAL; + + spin_lock_irqsave(&priv->mhccif_lock, flag); + list_for_each_entry_safe(cb, next, &priv->mhccif_cb_list, entry) { + if (cb->chs == chs) { + list_del(&cb->entry); + devm_kfree(mdev->dev, cb); + goto out; + } + } + ret = -EFAULT; + dev_warn(mdev->dev, "Unable to unregister evt, no chs=0x%08X has been registered.\n", chs); +out: + spin_unlock_irqrestore(&priv->mhccif_lock, flag); + + return ret; +} + +static void mtk_mhccif_mask_evt(struct mtk_md_dev *mdev, u32 chs) +{ + struct mtk_pci_priv *priv = mdev->hw_priv; + + mtk_pci_write32(mdev, priv->cfg->mhccif_rc_base_addr + + MHCCIF_EP2RC_SW_INT_EAP_MASK_SET, chs); +} + +static void mtk_mhccif_unmask_evt(struct mtk_md_dev *mdev, u32 chs) +{ + struct mtk_pci_priv *priv = mdev->hw_priv; + + mtk_pci_write32(mdev, priv->cfg->mhccif_rc_base_addr + + MHCCIF_EP2RC_SW_INT_EAP_MASK_CLR, chs); +} + +static void mtk_mhccif_clear_evt(struct mtk_md_dev *mdev, u32 chs) +{ + struct mtk_pci_priv *priv = mdev->hw_priv; + + mtk_pci_write32(mdev, priv->cfg->mhccif_rc_base_addr + + MHCCIF_EP2RC_SW_INT_ACK, chs); +} + +static int mtk_mhccif_send_evt(struct mtk_md_dev *mdev, u32 ch) +{ + struct mtk_pci_priv *priv = mdev->hw_priv; + u32 rc_base; + + rc_base = priv->cfg->mhccif_rc_base_addr; + /* Only allow one ch to be triggered at a time */ + if ((ch & (ch - 1)) || !ch) { + dev_err(mdev->dev, "Unsupported ext evt ch=0x%08X\n", ch); + return -EINVAL; + } + + mtk_pci_write32(mdev, rc_base + MHCCIF_RC2EP_SW_BSY, ch); + mtk_pci_write32(mdev, rc_base + MHCCIF_RC2EP_SW_TCHNUM, ffs(ch) - 1); + + return 0; +} + +static u32 mtk_mhccif_get_evt_status(struct mtk_md_dev *mdev) +{ + struct mtk_pci_priv *priv = mdev->hw_priv; + + return mtk_pci_read32(mdev, priv->cfg->mhccif_rc_base_addr + MHCCIF_EP2RC_SW_INT_STS); +} + +static int 
mtk_pci_acpi_reset(struct mtk_md_dev *mdev, char *fn_name) +{ +#ifdef CONFIG_ACPI + struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL }; + acpi_status acpi_ret; + acpi_handle handle; + int ret = 0; + + handle = ACPI_HANDLE(mdev->dev); + if (!handle) { + dev_err(mdev->dev, "Unsupported, acpi handle isn't found\n"); + ret = -ENODEV; + goto err; + } + if (!acpi_has_method(handle, fn_name)) { + dev_err(mdev->dev, "Unsupported, _RST method isn't found\n"); + ret = -ENODEV; + goto err; + } + acpi_ret = acpi_evaluate_object(handle, fn_name, NULL, &buffer); + if (ACPI_FAILURE(acpi_ret)) { + dev_err(mdev->dev, "Failed to execute %s method: %s\n", + fn_name, + acpi_format_exception(acpi_ret)); + ret = -EFAULT; + goto err; + } + dev_info(mdev->dev, "FLDR execute successfully\n"); + acpi_os_free(buffer.pointer); +err: + return ret; +#else + dev_err(mdev->dev, "Unsupported, CONFIG ACPI hasn't been set to 'y'\n"); + return -ENODEV; +#endif +} + +static int mtk_pci_fldr(struct mtk_md_dev *mdev) +{ + return mtk_pci_acpi_reset(mdev, "_RST"); +} + +static int mtk_pci_pldr(struct mtk_md_dev *mdev) +{ + return mtk_pci_acpi_reset(mdev, "MRST._RST"); +} + +static int mtk_pci_reset(struct mtk_md_dev *mdev, enum mtk_reset_type type) +{ + switch (type) { + case RESET_RGU: + return mtk_mhccif_send_evt(mdev, EXT_EVT_H2D_DEVICE_RESET); + case RESET_FLDR: + return mtk_pci_fldr(mdev); + case RESET_PLDR: + return mtk_pci_pldr(mdev); + } + + return -EINVAL; +} + +static int mtk_pci_reinit(struct mtk_md_dev *mdev, enum mtk_reinit_type type) +{ + struct pci_dev *pdev = to_pci_dev(mdev->dev); + struct mtk_pci_priv *priv = mdev->hw_priv; + int ret, ltr, l1ss; + + /* restore ltr */ + ltr = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_LTR); + if (ltr) { + pci_write_config_word(pdev, ltr + PCI_LTR_MAX_SNOOP_LAT, + priv->ltr_max_snoop_lat); + pci_write_config_word(pdev, ltr + PCI_LTR_MAX_NOSNOOP_LAT, + priv->ltr_max_nosnoop_lat); + } + /* restore l1ss */ + l1ss = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_L1SS); + if (l1ss) { + pci_write_config_dword(pdev, l1ss + PCI_L1SS_CTL1, priv->l1ss_ctl1); + pci_write_config_dword(pdev, l1ss + PCI_L1SS_CTL2, priv->l1ss_ctl2); + } + + ret = mtk_pci_atr_init(mdev); + if (ret) + return ret; + + if (priv->irq_type == PCI_IRQ_MSIX) { + if (priv->irq_cnt != MTK_IRQ_CNT_MAX) + mtk_pci_set_msix_merged(priv, priv->irq_cnt); + } + + mtk_pci_unmask_irq(mdev, priv->rgu_irq_id); + mtk_pci_unmask_irq(mdev, priv->mhccif_irq_id); + + /* In L2 resume, device would disable PCIe interrupt, + * and this step would re-enable PCIe interrupt. + * For L3, just do this with no effect. 
+ */ + if (type == REINIT_TYPE_RESUME) + mtk_pci_mac_write32(priv, priv->cfg->istatus_host_ctrl_addr, 0); + + dev_info(mdev->dev, "PCIe reinit type=%d\n", type); + + return 0; +} + +static bool mtk_pci_link_check(struct mtk_md_dev *mdev) +{ + return !pci_device_is_present(to_pci_dev(mdev->dev)); +} + +static int mtk_pci_get_hp_status(struct mtk_md_dev *mdev) +{ + struct mtk_pci_priv *priv = mdev->hw_priv; + + return priv->rc_hp_on; +} + +static u32 mtk_pci_get_resume_state(struct mtk_md_dev *mdev) +{ + return mtk_pci_mac_read32(mdev->hw_priv, REG_PCIE_DEBUG_DUMMY_3); +} + +static const struct mtk_hw_ops mtk_pci_ops = { + .read32 = mtk_pci_read32, + .write32 = mtk_pci_write32, + .get_dev_state = mtk_pci_get_dev_state, + .ack_dev_state = mtk_pci_ack_dev_state, + .get_ds_status = mtk_pci_get_ds_status, + .ds_lock = mtk_pci_ds_lock, + .ds_unlock = mtk_pci_ds_unlock, + .set_l1ss = mtk_pci_set_l1ss, + .get_resume_state = mtk_pci_get_resume_state, + .get_irq_id = mtk_pci_get_irq_id, + .get_virq_id = mtk_pci_get_virq_id, + .register_irq = mtk_pci_register_irq, + .unregister_irq = mtk_pci_unregister_irq, + .mask_irq = mtk_pci_mask_irq, + .unmask_irq = mtk_pci_unmask_irq, + .clear_irq = mtk_pci_clear_irq, + .register_ext_evt = mtk_mhccif_register_evt, + .unregister_ext_evt = mtk_mhccif_unregister_evt, + .mask_ext_evt = mtk_mhccif_mask_evt, + .unmask_ext_evt = mtk_mhccif_unmask_evt, + .clear_ext_evt = mtk_mhccif_clear_evt, + .send_ext_evt = mtk_mhccif_send_evt, + .get_ext_evt_status = mtk_mhccif_get_evt_status, + .reset = mtk_pci_reset, + .reinit = mtk_pci_reinit, + .get_hp_status = mtk_pci_get_hp_status, +}; + +static void mtk_mhccif_isr_work(struct work_struct *work) +{ + struct mtk_pci_priv *priv = container_of(work, struct mtk_pci_priv, mhccif_work); + struct mtk_md_dev *mdev = priv->irq_desc->mdev; + struct mtk_mhccif_cb *cb; + unsigned long flag; + u32 stat, mask; + + stat = mtk_mhccif_get_evt_status(mdev); + mask = mtk_pci_read32(mdev, priv->cfg->mhccif_rc_base_addr + + MHCCIF_EP2RC_SW_INT_EAP_MASK); + dev_info(mdev->dev, "External events: mhccif_stat=0x%08X mask=0x%08X\n", stat, mask); + + if (unlikely(stat == U32_MAX && mtk_pci_link_check(mdev))) { + /* When link failed, we don't need to unmask/clear. 
*/ + dev_err(mdev->dev, "Failed to check link in MHCCIF handler.\n"); + return; + } + + stat &= ~mask; + spin_lock_irqsave(&priv->mhccif_lock, flag); + list_for_each_entry(cb, &priv->mhccif_cb_list, entry) { + if (cb->chs & stat) + cb->evt_cb(cb->chs & stat, cb->data); + } + spin_unlock_irqrestore(&priv->mhccif_lock, flag); + + mtk_pci_clear_irq(mdev, priv->mhccif_irq_id); + mtk_pci_unmask_irq(mdev, priv->mhccif_irq_id); +} + +static const struct mtk_pci_dev_cfg mtk_dev_cfg_0800 = { + .mhccif_rc_base_addr = 0x10012000, + .mhccif_trsl_size = 0x2000, + .mhccif_rc_reg_trsl_addr = 0x12020000, + .istatus_host_ctrl_addr = REG_ISTATUS_HOST_CTRL_NEW, + .irq_tbl = { + [MTK_IRQ_SRC_DPMAIF] = 24, + [MTK_IRQ_SRC_CLDMA0] = 25, + [MTK_IRQ_SRC_CLDMA1] = 26, + [MTK_IRQ_SRC_CLDMA2] = 27, + [MTK_IRQ_SRC_MHCCIF] = 28, + [MTK_IRQ_SRC_DPMAIF2] = 29, + [MTK_IRQ_SRC_SAP_RGU] = 30, + [MTK_IRQ_SRC_CLDMA3] = 31, + [MTK_IRQ_SRC_PM_LOCK] = 0, + [MTK_IRQ_SRC_DPMAIF3] = 7, + }, +}; + +static const struct pci_device_id mtk_pci_ids[] = { + MTK_PCI_DEV_CFG(0x0800, mtk_dev_cfg_0800), + { /* end: all zeroes */ } +}; +MODULE_DEVICE_TABLE(pci, mtk_pci_ids); + +static int mtk_pci_bar_init(struct mtk_md_dev *mdev) +{ + struct pci_dev *pdev = to_pci_dev(mdev->dev); + struct mtk_pci_priv *priv = mdev->hw_priv; + int ret; + + ret = pcim_iomap_regions(pdev, MTK_REQUESTED_BARS, mdev->dev_str); + if (ret) { + dev_err(mdev->dev, "Failed to init MMIO. ret=%d\n", ret); + return ret; + } + + /* get ioremapped memory */ + priv->mac_reg_base = pcim_iomap_table(pdev)[MTK_BAR_0_1_IDX]; + priv->bar23_addr = pcim_iomap_table(pdev)[MTK_BAR_2_3_IDX]; + dev_info(mdev->dev, "BAR0/1 Addr=0x%p, BAR2/3 Addr=0x%p\n", + priv->mac_reg_base, priv->bar23_addr); + /* We use MD view base address "0" to observe registers */ + priv->ext_reg_base = priv->bar23_addr - ATR_PCIE_REG_TRSL_ADDR; + + return 0; +} + +static void mtk_pci_bar_exit(struct mtk_md_dev *mdev) +{ + pcim_iounmap_regions(to_pci_dev(mdev->dev), MTK_REQUESTED_BARS); +} + +static int mtk_mhccif_irq_cb(int irq_id, void *data) +{ + struct mtk_md_dev *mdev = data; + struct mtk_pci_priv *priv; + + priv = mdev->hw_priv; + queue_work(system_highpri_wq, &priv->mhccif_work); + + return 0; +} + +static int mtk_mhccif_init(struct mtk_md_dev *mdev) +{ + struct mtk_pci_priv *priv = mdev->hw_priv; + int ret; + + INIT_LIST_HEAD(&priv->mhccif_cb_list); + spin_lock_init(&priv->mhccif_lock); + INIT_WORK(&priv->mhccif_work, mtk_mhccif_isr_work); + + ret = mtk_pci_get_irq_id(mdev, MTK_IRQ_SRC_MHCCIF); + if (ret < 0) { + dev_err(mdev->dev, "Failed to get mhccif_irq_id. ret=%d\n", ret); + goto err; + } + priv->mhccif_irq_id = ret; + + ret = mtk_pci_register_irq(mdev, priv->mhccif_irq_id, mtk_mhccif_irq_cb, mdev); + if (ret) { + dev_err(mdev->dev, "Failed to register mhccif_irq callback\n"); + goto err; + } + + /* To check if the device rebooted. + * The reboot of some PC doesn't cause the device power cycle. 
+ */ + mtk_pci_read32(mdev, priv->cfg->mhccif_rc_base_addr + + MHCCIF_EP2RC_SW_INT_EAP_MASK); + +err: + return ret; +} + +static void mtk_mhccif_exit(struct mtk_md_dev *mdev) +{ + struct mtk_pci_priv *priv = mdev->hw_priv; + + mtk_pci_unregister_irq(mdev, priv->mhccif_irq_id); + cancel_work_sync(&priv->mhccif_work); +} + +static void mtk_rgu_work(struct work_struct *work) +{ + struct mtk_pci_priv *priv; + struct mtk_md_dev *mdev; + struct pci_dev *pdev; + + priv = container_of(to_delayed_work(work), struct mtk_pci_priv, rgu_work); + mdev = priv->mdev; + pdev = to_pci_dev(mdev->dev); + + dev_info(mdev->dev, "RGU work\n"); + + mtk_pci_mask_irq(mdev, priv->rgu_irq_id); + mtk_pci_clear_irq(mdev, priv->rgu_irq_id); + + if (!pdev->msix_enabled) + return; + + mtk_pci_unmask_irq(mdev, priv->rgu_irq_id); +} + +static int mtk_rgu_irq_cb(int irq_id, void *data) +{ + struct mtk_md_dev *mdev = data; + struct mtk_pci_priv *priv; + + priv = mdev->hw_priv; + schedule_delayed_work(&priv->rgu_work, msecs_to_jiffies(1)); + + return 0; +} + +static int mtk_rgu_init(struct mtk_md_dev *mdev) +{ + struct mtk_pci_priv *priv = mdev->hw_priv; + int ret; + + ret = mtk_pci_get_irq_id(mdev, MTK_IRQ_SRC_SAP_RGU); + if (ret < 0) { + dev_err(mdev->dev, "Failed to get rgu_irq_id. ret=%d\n", ret); + goto err; + } + priv->rgu_irq_id = ret; + + INIT_DELAYED_WORK(&priv->rgu_work, mtk_rgu_work); + + mtk_pci_mask_irq(mdev, priv->rgu_irq_id); + mtk_pci_clear_irq(mdev, priv->rgu_irq_id); + + ret = mtk_pci_register_irq(mdev, priv->rgu_irq_id, mtk_rgu_irq_cb, mdev); + if (ret) { + dev_err(mdev->dev, "Failed to register rgu_irq callback\n"); + goto err; + } + + mtk_pci_unmask_irq(mdev, priv->rgu_irq_id); + +err: + return ret; +} + +static void mtk_rgu_exit(struct mtk_md_dev *mdev) +{ + struct mtk_pci_priv *priv = mdev->hw_priv; + + mtk_pci_unregister_irq(mdev, priv->rgu_irq_id); + cancel_delayed_work_sync(&priv->rgu_work); +} + +static irqreturn_t mtk_pci_irq_handler(struct mtk_md_dev *mdev, u32 irq_state) +{ + struct mtk_pci_priv *priv = mdev->hw_priv; + int irq_id; + + /* Check whether each set bit has a callback, if has, call it */ + do { + irq_id = fls(irq_state) - 1; + irq_state &= ~BIT(irq_id); + if (likely(priv->irq_cb_list[irq_id])) + priv->irq_cb_list[irq_id](irq_id, priv->irq_cb_data[irq_id]); + else + dev_err(mdev->dev, "Unhandled irq_id=%d, no callback for it.\n", irq_id); + } while (irq_state); + + return IRQ_HANDLED; +} + +static irqreturn_t mtk_pci_irq_msix(int irq, void *data) +{ + struct mtk_pci_irq_desc *irq_desc = data; + struct mtk_md_dev *mdev = irq_desc->mdev; + struct mtk_pci_priv *priv; + u32 irq_state, irq_enable; + + priv = mdev->hw_priv; + irq_state = mtk_pci_mac_read32(priv, REG_MSIX_ISTATUS_HOST_GRP0_0); + irq_enable = mtk_pci_mac_read32(priv, REG_IMASK_HOST_MSIX_GRP0_0); + + irq_state &= irq_enable & irq_desc->msix_bits; + + if (unlikely(!irq_state)) + return IRQ_NONE; + + /* Mask the bit and user needs to unmask by itself */ + mtk_pci_mac_write32(priv, REG_IMASK_HOST_MSIX_CLR_GRP0_0, irq_state & (~BIT(30))); + + return mtk_pci_irq_handler(mdev, irq_state); +} + +static int mtk_pci_request_irq_msix(struct mtk_md_dev *mdev, int irq_cnt_allocated) +{ + struct mtk_pci_priv *priv = mdev->hw_priv; + struct mtk_pci_irq_desc *irq_desc; + struct pci_dev *pdev; + int irq_cnt; + int ret, i; + + /* calculate the nearest 2's power number */ + irq_cnt = BIT(fls(irq_cnt_allocated) - 1); + pdev = to_pci_dev(mdev->dev); + irq_desc = priv->irq_desc; + for (i = 0; i < irq_cnt; i++) { + irq_desc[i].mdev = mdev; + if 
(irq_cnt == MTK_IRQ_CNT_MAX) + irq_desc[i].msix_bits = BIT(i); + else + irq_desc[i].msix_bits = mtk_msix_bits_map[i][ffs(irq_cnt) - 1]; + snprintf(irq_desc[i].name, MTK_IRQ_NAME_LEN, "msix%d-%s", i, mdev->dev_str); + ret = pci_request_irq(pdev, i, mtk_pci_irq_msix, NULL, + &irq_desc[i], irq_desc[i].name); + if (ret) { + dev_err(mdev->dev, "Failed to request %s: ret=%d\n", irq_desc[i].name, ret); + for (i--; i >= 0; i--) + pci_free_irq(pdev, i, &irq_desc[i]); + return ret; + } + } + priv->irq_cnt = irq_cnt; + priv->irq_type = PCI_IRQ_MSIX; + + if (irq_cnt != MTK_IRQ_CNT_MAX) + mtk_pci_set_msix_merged(priv, irq_cnt); + + return 0; +} + +static int mtk_pci_request_irq(struct mtk_md_dev *mdev, int max_irq_cnt, int irq_type) +{ + struct pci_dev *pdev = to_pci_dev(mdev->dev); + int irq_cnt; + int ret; + + irq_cnt = pci_alloc_irq_vectors(pdev, MTK_IRQ_CNT_MIN, max_irq_cnt, irq_type); + mdev->msi_nvecs = irq_cnt; + + if (irq_cnt < MTK_IRQ_CNT_MIN) { + dev_err(mdev->dev, + "Unable to alloc pci irq vectors. ret=%d maxirqcnt=%d irqtype=0x%x\n", + irq_cnt, max_irq_cnt, irq_type); + ret = -EFAULT; + goto err; + } + + ret = mtk_pci_request_irq_msix(mdev, irq_cnt); +err: + return ret; +} + +static void mtk_pci_free_irq(struct mtk_md_dev *mdev) +{ + struct pci_dev *pdev = to_pci_dev(mdev->dev); + struct mtk_pci_priv *priv = mdev->hw_priv; + int i; + + for (i = 0; i < priv->irq_cnt; i++) + pci_free_irq(pdev, i, &priv->irq_desc[i]); + + pci_free_irq_vectors(pdev); +} + +static void mtk_pci_save_state(struct mtk_md_dev *mdev) +{ + struct pci_dev *pdev = to_pci_dev(mdev->dev); + struct mtk_pci_priv *priv = mdev->hw_priv; + int ltr, l1ss; + + pci_save_state(pdev); + /* save ltr */ + ltr = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_LTR); + if (ltr) { + pci_read_config_word(pdev, ltr + PCI_LTR_MAX_SNOOP_LAT, + &priv->ltr_max_snoop_lat); + pci_read_config_word(pdev, ltr + PCI_LTR_MAX_NOSNOOP_LAT, + &priv->ltr_max_nosnoop_lat); + } + /* save l1ss */ + l1ss = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_L1SS); + if (l1ss) { + pci_read_config_dword(pdev, l1ss + PCI_L1SS_CTL1, &priv->l1ss_ctl1); + pci_read_config_dword(pdev, l1ss + PCI_L1SS_CTL2, &priv->l1ss_ctl2); + } +} + +static int mtk_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) +{ + struct device *dev = &pdev->dev; + struct mtk_pci_priv *priv; + struct mtk_md_dev *mdev; + int ret; + + mdev = devm_kzalloc(dev, sizeof(*mdev), GFP_KERNEL); + if (!mdev) { + ret = -ENOMEM; + goto err_alloc_mdev; + } + + priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); + if (!priv) { + ret = -ENOMEM; + goto err_alloc_priv; + } + + pci_set_drvdata(pdev, mdev); + priv->cfg = (struct mtk_pci_dev_cfg *)id->driver_data; + priv->mdev = mdev; + mdev->hw_ver = pdev->device; + mdev->hw_ops = &mtk_pci_ops; + mdev->hw_priv = priv; + mdev->dev = dev; + snprintf(mdev->dev_str, MTK_DEV_STR_LEN, "%02x%02x%d", + pdev->bus->number, PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn)); + + dev_info(mdev->dev, "Start probe 0x%x, state_saved[%d]\n", + mdev->hw_ver, pdev->state_saved); + + if (pdev->state_saved) + pci_restore_state(pdev); + + /* enable host to device access. */ + ret = pcim_enable_device(pdev); + if (ret) { + dev_err(mdev->dev, "Failed to enable pci device.\n"); + goto err_enable_pdev; + } + + ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)); + if (ret) { + dev_err(mdev->dev, "Failed to set DMA Mask and Coherent. 
(ret=%d)\n", ret); + goto err_set_dma_mask; + } + + ret = mtk_pci_bar_init(mdev); + if (ret) + goto err_bar_init; + + ret = mtk_pci_atr_init(mdev); + if (ret) + goto err_atr_init; + + ret = mtk_mhccif_init(mdev); + if (ret) + goto err_mhccif_init; + + ret = mtk_pci_request_irq(mdev, MTK_IRQ_CNT_MAX, PCI_IRQ_MSIX); + if (ret) + goto err_request_irq; + + ret = mtk_dev_init(mdev); + if (ret) { + dev_err(mdev->dev, "Failed to init dev.\n"); + goto err_dev_init; + } + + ret = mtk_rgu_init(mdev); + if (ret) + goto err_rgu_init; + + /* enable device to host interrupt. */ + pci_set_master(pdev); + + mtk_pci_unmask_irq(mdev, priv->mhccif_irq_id); + + mtk_pci_save_state(mdev); + priv->saved_state = pci_store_saved_state(pdev); + if (!priv->saved_state) { + ret = -EFAULT; + goto err_save_state; + } + + ret = mtk_dev_start(mdev); + if (ret) { + dev_err(mdev->dev, "Failed to start dev.\n"); + goto err_dev_start; + } + dev_info(mdev->dev, "Probe done hw_ver=0x%x\n", mdev->hw_ver); + return 0; + +err_dev_start: + pci_load_and_free_saved_state(pdev, &priv->saved_state); +err_save_state: + pci_disable_pcie_error_reporting(pdev); + pci_clear_master(pdev); + mtk_rgu_exit(mdev); +err_rgu_init: + mtk_dev_exit(mdev); +err_dev_init: + mtk_pci_free_irq(mdev); +err_request_irq: + mtk_mhccif_exit(mdev); +err_mhccif_init: +err_atr_init: + mtk_pci_bar_exit(mdev); +err_bar_init: +err_set_dma_mask: + pci_disable_device(pdev); +err_enable_pdev: + devm_kfree(dev, priv); +err_alloc_priv: + devm_kfree(dev, mdev); +err_alloc_mdev: + dev_err(dev, "Failed to probe device, ret=%d\n", ret); + + return ret; +} + +static void mtk_pci_remove(struct pci_dev *pdev) +{ + struct mtk_md_dev *mdev = pci_get_drvdata(pdev); + struct mtk_pci_priv *priv = mdev->hw_priv; + struct device *dev = &pdev->dev; + int ret; + + mtk_pci_mask_irq(mdev, priv->rgu_irq_id); + mtk_pci_mask_irq(mdev, priv->mhccif_irq_id); + pci_disable_pcie_error_reporting(pdev); + mtk_dev_exit(mdev); + + ret = mtk_pci_fldr(mdev); + if (ret) + mtk_mhccif_send_evt(mdev, EXT_EVT_H2D_DEVICE_RESET); + + pci_clear_master(pdev); + mtk_rgu_exit(mdev); + mtk_mhccif_exit(mdev); + mtk_pci_free_irq(mdev); + mtk_pci_bar_exit(mdev); + pci_disable_device(pdev); + pci_load_and_free_saved_state(pdev, &priv->saved_state); + devm_kfree(dev, priv); + devm_kfree(dev, mdev); + dev_info(dev, "Remove done, state_saved[%d]\n", pdev->state_saved); +} + +static pci_ers_result_t mtk_pci_error_detected(struct pci_dev *pdev, + pci_channel_state_t state) +{ + return PCI_ERS_RESULT_NEED_RESET; +} + +static pci_ers_result_t mtk_pci_slot_reset(struct pci_dev *pdev) +{ + return PCI_ERS_RESULT_RECOVERED; +} + +static void mtk_pci_io_resume(struct pci_dev *pdev) +{ +} + +static const struct pci_error_handlers mtk_pci_err_handler = { + .error_detected = mtk_pci_error_detected, + .slot_reset = mtk_pci_slot_reset, + .resume = mtk_pci_io_resume, +}; + +static struct pci_driver mtk_pci_drv = { + .name = "mtk_pci_drv", + .id_table = mtk_pci_ids, + + .probe = mtk_pci_probe, + .remove = mtk_pci_remove, + + .err_handler = &mtk_pci_err_handler +}; + +static int __init mtk_drv_init(void) +{ + return pci_register_driver(&mtk_pci_drv); +} +module_init(mtk_drv_init); + +static void __exit mtk_drv_exit(void) +{ + pci_unregister_driver(&mtk_pci_drv); +} +module_exit(mtk_drv_exit); + +MODULE_LICENSE("GPL"); diff --git a/drivers/net/wwan/mediatek/pcie/mtk_pci.h b/drivers/net/wwan/mediatek/pcie/mtk_pci.h new file mode 100644 index 000000000000..c7b29e94aafc --- /dev/null +++ b/drivers/net/wwan/mediatek/pcie/mtk_pci.h @@ -0,0 
+1,150 @@ +/* SPDX-License-Identifier: BSD-3-Clause-Clear + * + * Copyright (c) 2022, MediaTek Inc. + */ + +#ifndef __MTK_PCI_H__ +#define __MTK_PCI_H__ + +#include +#include + +#include "mtk_dev.h" + +enum mtk_atr_type { + ATR_PCI2AXI = 0, + ATR_AXI2PCI +}; + +enum mtk_atr_src_port { + ATR_SRC_PCI_WIN0 = 0, + ATR_SRC_AXIS_0 = 2, + ATR_SRC_AXIS_2 = 3, + ATR_SRC_AXIS_3 = 4, +}; + +enum mtk_atr_dst_port { + ATR_DST_PCI_TRX = 0, + ATR_DST_AXIM_0 = 4, +}; + +#define ATR_PORT_OFFSET 0x100 +#define ATR_TABLE_OFFSET 0x20 +#define ATR_TABLE_NUM_PER_ATR 8 +#define ATR_WIN0_SRC_ADDR_LSB_DEFT 0x0000007f +#define ATR_PCIE_REG_TRSL_ADDR 0x10000000 +#define ATR_PCIE_REG_SIZE 0x00400000 +#define ATR_PCIE_REG_PORT ATR_SRC_PCI_WIN0 +#define ATR_PCIE_REG_TABLE_NUM 1 +#define ART_PCIE_REG_MHCCIF_TABLE_NUM 0 +#define ATR_PCIE_REG_TRSL_PORT ATR_DST_AXIM_0 +#define ATR_PCIE_DEV_DMA_PORT_START ATR_SRC_AXIS_0 +#define ATR_PCIE_DEV_DMA_PORT_END ATR_SRC_AXIS_2 +#define ATR_PCIE_DEV_DMA_SRC_ADDR 0x00000000 +#define ATR_PCIE_DEV_DMA_TRANSPARENT 1 +#define ATR_PCIE_DEV_DMA_SIZE 0 +#define ATR_PCIE_DEV_DMA_TABLE_NUM 0 +#define ATR_PCIE_DEV_DMA_TRSL_ADDR 0x00000000 + +#define MTK_BAR_0_1_IDX 0 +#define MTK_BAR_2_3_IDX 2 +/* Only use BAR0/1 and 2/3, so we should input 0b0101 for the two bar, + * Input 0xf would cause error. + */ +#define MTK_REQUESTED_BARS ((1 << MTK_BAR_0_1_IDX) | (1 << MTK_BAR_2_3_IDX)) + +#define MTK_IRQ_CNT_MIN 1 +#define MTK_IRQ_CNT_MAX 32 +#define MTK_IRQ_NAME_LEN 20 + +#define MTK_INVAL_IRQ_SRC -1 + +#define MTK_FORCE_MAC_ACTIVE_BIT BIT(6) +#define MTK_DS_LOCK_REG_BIT BIT(7) + +/* mhccif registers */ +#define MHCCIF_RC2EP_SW_BSY 0x4 +#define MHCCIF_RC2EP_SW_INT_START 0x8 +#define MHCCIF_RC2EP_SW_TCHNUM 0xC +#define MHCCIF_EP2RC_SW_INT_STS 0x10 +#define MHCCIF_EP2RC_SW_INT_ACK 0x14 +#define MHCCIF_EP2RC_SW_INT_EAP_MASK 0x20 +#define MHCCIF_EP2RC_SW_INT_EAP_MASK_SET 0x30 +#define MHCCIF_EP2RC_SW_INT_EAP_MASK_CLR 0x40 +#define MHCCIF_RC2EP_PCIE_PM_COUNTER 0x12C + +#define MTK_PCI_CLASS 0x0D4000 +#define MTK_PCI_VENDOR_ID 0x14C3 + +#define MTK_DISABLE_DS_BIT(grp) BIT(grp) +#define MTK_ENABLE_DS_BIT(grp) BIT((grp) << 8) + +#define MTK_PCI_DEV_CFG(id, cfg) \ +{ \ + PCI_DEVICE(MTK_PCI_VENDOR_ID, id), \ + MTK_PCI_CLASS, PCI_ANY_ID, \ + .driver_data = (kernel_ulong_t)&(cfg), \ +} + +struct mtk_pci_dev_cfg { + u32 flag; + u32 mhccif_rc_base_addr; + u32 mhccif_rc_reg_trsl_addr; + u32 mhccif_trsl_size; + u32 istatus_host_ctrl_addr; + int irq_tbl[MTK_IRQ_SRC_MAX]; +}; + +struct mtk_pci_irq_desc { + struct mtk_md_dev *mdev; + u32 msix_bits; + char name[MTK_IRQ_NAME_LEN]; +}; + +struct mtk_pci_priv { + const struct mtk_pci_dev_cfg *cfg; + void *mdev; + void __iomem *bar23_addr; + void __iomem *mac_reg_base; + void __iomem *ext_reg_base; + int rc_hp_on; /* Bridge hotplug status */ + int rgu_irq_id; + int irq_cnt; + int irq_type; + void *irq_cb_data[MTK_IRQ_CNT_MAX]; + + int (*irq_cb_list[MTK_IRQ_CNT_MAX])(int irq_id, void *data); + struct mtk_pci_irq_desc irq_desc[MTK_IRQ_CNT_MAX]; + struct list_head mhccif_cb_list; + /* mhccif_lock: lock to protect mhccif_cb_list */ + spinlock_t mhccif_lock; + struct work_struct mhccif_work; + int mhccif_irq_id; + struct delayed_work rgu_work; + struct pci_saved_state *saved_state; + u16 ltr_max_snoop_lat; + u16 ltr_max_nosnoop_lat; + u32 l1ss_ctl1; + u32 l1ss_ctl2; +}; + +struct mtk_mhccif_cb { + struct list_head entry; + int (*evt_cb)(u32 status, void *data); + void *data; + u32 chs; +}; + +struct mtk_atr_cfg { + u64 src_addr; + u64 trsl_addr; + u64 size; + u32 type; /* Port 
type */ + u32 port; /* Port number */ + u32 table; /* Table number (8 tables for each port) */ + u32 trsl_id; + u32 trsl_param; + u32 transparent; +}; + +#endif /* __MTK_PCI_H__ */ diff --git a/drivers/net/wwan/mediatek/pcie/mtk_reg.h b/drivers/net/wwan/mediatek/pcie/mtk_reg.h new file mode 100644 index 000000000000..23fa7fd9518e --- /dev/null +++ b/drivers/net/wwan/mediatek/pcie/mtk_reg.h @@ -0,0 +1,69 @@ +/* SPDX-License-Identifier: BSD-3-Clause-Clear + * + * Copyright (c) 2022, MediaTek Inc. + */ + +#ifndef __MTK_REG_H__ +#define __MTK_REG_H__ + +enum mtk_ext_evt_h2d { + EXT_EVT_H2D_EXCEPT_ACK = 1 << 1, + EXT_EVT_H2D_EXCEPT_CLEARQ_ACK = 1 << 2, + EXT_EVT_H2D_PCIE_DS_LOCK = 1 << 3, + EXT_EVT_H2D_RESERVED_FOR_DPMAIF = 1 << 8, + EXT_EVT_H2D_PCIE_PM_SUSPEND_REQ = 1 << 9, + EXT_EVT_H2D_PCIE_PM_RESUME_REQ = 1 << 10, + EXT_EVT_H2D_PCIE_PM_SUSPEND_REQ_AP = 1 << 11, + EXT_EVT_H2D_PCIE_PM_RESUME_REQ_AP = 1 << 12, + EXT_EVT_H2D_DEVICE_RESET = 1 << 13, +}; + +#define REG_PCIE_SW_TRIG_INT 0x00BC +#define REG_PCIE_LTSSM_STATUS 0x0150 +#define REG_IMASK_LOCAL 0x0180 +#define REG_ISTATUS_LOCAL 0x0184 +#define REG_INT_ENABLE_HOST 0x0188 +#define REG_ISTATUS_HOST 0x018C +#define REG_PCIE_LOW_POWER_CTRL 0x0194 +#define REG_ISTATUS_HOST_CTRL 0x01AC +#define REG_ISTATUS_PENDING_ADT 0x01D4 +#define REG_INT_ENABLE_HOST_SET 0x01F0 +#define REG_INT_ENABLE_HOST_CLR 0x01F4 +#define REG_PCIE_DMA_DUMMY_0 0x01F8 +#define REG_ISTATUS_HOST_CTRL_NEW 0x031C +#define REG_PCIE_MISC_CTRL 0x0348 +#define REG_PCIE_DUMMY_0 0x03C0 +#define REG_SW_TRIG_INTR_SET 0x03C8 +#define REG_SW_TRIG_INTR_CLR 0x03CC +#define REG_PCIE_CFG_MSIX 0x03EC +#define REG_ATR_PCIE_WIN0_T0_SRC_ADDR_LSB 0x0600 +#define REG_ATR_PCIE_WIN0_T0_SRC_ADDR_MSB 0x0604 +#define REG_ATR_PCIE_WIN0_T0_TRSL_ADDR_LSB 0x0608 +#define REG_ATR_PCIE_WIN0_T0_TRSL_ADDR_MSB 0x060C +#define REG_ATR_PCIE_WIN0_T0_TRSL_PARAM 0x0610 +#define REG_PCIE_DEBUG_DUMMY_0 0x0D00 +#define REG_PCIE_DEBUG_DUMMY_1 0x0D04 +#define REG_PCIE_DEBUG_DUMMY_2 0x0D08 +#define REG_PCIE_DEBUG_DUMMY_3 0x0D0C +#define REG_PCIE_DEBUG_DUMMY_4 0x0D10 +#define REG_PCIE_DEBUG_DUMMY_5 0x0D14 +#define REG_PCIE_DEBUG_DUMMY_6 0x0D18 +#define REG_PCIE_DEBUG_DUMMY_7 0x0D1C +#define REG_PCIE_RESOURCE_STATUS 0x0D28 +#define REG_RC2EP_SW_TRIG_LOCAL_INTR_STAT 0x0D94 +#define REG_RC2EP_SW_TRIG_LOCAL_INTR_SET 0x0D98 +#define REG_RC2EP_SW_TRIG_LOCAL_INTR_CLR 0x0D9C +#define REG_DIS_ASPM_LOWPWR_SET_0 0x0E50 +#define REG_DIS_ASPM_LOWPWR_CLR_0 0x0E54 +#define REG_DIS_ASPM_LOWPWR_SET_1 0x0E58 +#define REG_DIS_ASPM_LOWPWR_CLR_1 0x0E5C +#define REG_DIS_ASPM_LOWPWR_STS_0 0x0E60 +#define REG_DIS_ASPM_LOWPWR_STS_1 0x0E64 +#define REG_PCIE_PEXTP_MAC_SLEEP_CTRL 0x0E70 +#define REG_MSIX_SW_TRIG_SET_GRP0_0 0x0E80 +#define REG_MSIX_ISTATUS_HOST_GRP0_0 0x0F00 +#define REG_IMASK_HOST_MSIX_SET_GRP0_0 0x3000 +#define REG_IMASK_HOST_MSIX_CLR_GRP0_0 0x3080 +#define REG_IMASK_HOST_MSIX_GRP0_0 0x3100 +#define REG_DEV_INFRA_BASE 0x10001000 +#endif /* __MTK_REG_H__ */ From patchwork Tue Nov 22 11:11:41 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?b?WWFuY2hhbyBZYW5nICjmnajlvabotoUp?= X-Patchwork-Id: 24295 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp2140194wrr; Tue, 22 Nov 2022 03:21:17 -0800 (PST) X-Google-Smtp-Source: AA0mqf6hyywJP2YN5xTGQPmUb+/0OW3ssRXP8hk2k99+OneEQE215T7fGL3Pq6BryBeaENnKuORl X-Received: by 2002:a17:906:6703:b0:7ae:5dd6:e62d with SMTP id 
From: Yanchao Yang
To: Loic Poulain , Sergey Ryazanov , Johannes Berg , "David S . Miller" , Eric Dumazet , "Jakub Kicinski" , Paolo Abeni , netdev ML , kernel ML
CC: MTK ML , Liang Lu , Haijun Liu , Hua Yang , Ting Wang , Felix Chen , Mingliang Xu , Min Dong , Aiden Wang , Guohao Zhang , Chris Feng , "Yanchao Yang" , Lambert Wang , Mingchuang Qiao , Xiayu Zhang , Haozhe Chang , MediaTek Corporation
Subject: [PATCH net-next v1 02/13] net: wwan: tmi: Add buffer management
Date: Tue, 22 Nov 2022 19:11:41 +0800
Message-ID: <20221122111152.160377-3-yanchao.yang@mediatek.com>
In-Reply-To: <20221122111152.160377-1-yanchao.yang@mediatek.com>
References: <20221122111152.160377-1-yanchao.yang@mediatek.com>

From: MediaTek Corporation

To make I/O memory available as early as possible, a buffer management module is introduced. It creates buffer pools that reserve buffers through deferred work while the driver is not busy. Buffer management provides unified memory allocation/de-allocation interfaces for the other modules and supports two buffer types, SKB and page.

Two reload work queues with different priorities are provided to meet the different requirements of the control plane and the data plane. When the reserved buffer count of a pool drops below a threshold (by default 2/3 of the pool size), the reload work restarts and allocates buffers from the OS until the pool is full again. Once the pool is full, buffers freed by the user are released back to the OS instead of being kept in the pool.
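For clarity, a minimal, hypothetical consumer of the interfaces added by this patch is sketched below. Only the mtk_bm_* calls and the MTK_BUFF_SKB/MTK_BM_LOW_PRIO constants come from the patch; the example_ function name, the pool geometry and the error handling are made up for illustration:

    static int example_bm_usage(struct mtk_md_dev *mdev)
    {
            struct mtk_bm_pool *pool;
            void *buf;

            /* SKB pool: 64 buffers of 1664 bytes, reloaded from the low priority workqueue */
            pool = mtk_bm_pool_create(mdev, MTK_BUFF_SKB, 1664, 64, MTK_BM_LOW_PRIO);
            if (!pool)
                    return -ENOMEM;

            /* Takes a buffer from the pool and kicks the reload work when it runs low */
            buf = mtk_bm_alloc(pool);
            if (buf)
                    /* Returned to the pool, or released to the OS once the pool is full */
                    mtk_bm_free(pool, buf);

            /* Callers must stop alloc/free before destroying the pool */
            return mtk_bm_pool_destroy(mdev, pool);
    }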
Signed-off-by: Mingliang Xu Signed-off-by: MediaTek Corporation --- drivers/net/wwan/mediatek/Makefile | 3 +- drivers/net/wwan/mediatek/mtk_bm.c | 369 ++++++++++++++++++++++++++++ drivers/net/wwan/mediatek/mtk_bm.h | 79 ++++++ drivers/net/wwan/mediatek/mtk_dev.c | 11 +- drivers/net/wwan/mediatek/mtk_dev.h | 1 + 5 files changed, 461 insertions(+), 2 deletions(-) create mode 100644 drivers/net/wwan/mediatek/mtk_bm.c create mode 100644 drivers/net/wwan/mediatek/mtk_bm.h diff --git a/drivers/net/wwan/mediatek/Makefile b/drivers/net/wwan/mediatek/Makefile index ae5f8a5ba05a..122a791e1683 100644 --- a/drivers/net/wwan/mediatek/Makefile +++ b/drivers/net/wwan/mediatek/Makefile @@ -4,7 +4,8 @@ MODULE_NAME := mtk_tmi mtk_tmi-y = \ pcie/mtk_pci.o \ - mtk_dev.o + mtk_dev.o \ + mtk_bm.o ccflags-y += -I$(srctree)/$(src)/ ccflags-y += -I$(srctree)/$(src)/pcie/ diff --git a/drivers/net/wwan/mediatek/mtk_bm.c b/drivers/net/wwan/mediatek/mtk_bm.c new file mode 100644 index 000000000000..fa5abb82d038 --- /dev/null +++ b/drivers/net/wwan/mediatek/mtk_bm.c @@ -0,0 +1,369 @@ +// SPDX-License-Identifier: BSD-3-Clause-Clear +/* + * Copyright (c) 2022, MediaTek Inc. + */ + +#include +#include +#include +#include + +#include "mtk_bm.h" + +#define MTK_RELOAD_TH 3 +#define MTK_WQ_NAME_LEN 48 + +static int mtk_bm_page_pool_create(struct mtk_bm_pool *pool) +{ + INIT_LIST_HEAD(&pool->list.buff_list); + + return 0; +} + +static void mtk_bm_page_pool_destroy(struct mtk_bm_pool *pool) +{ + struct mtk_buff *mb, *next; + + list_for_each_entry_safe(mb, next, &pool->list.buff_list, entry) { + list_del(&mb->entry); + skb_free_frag(mb->data); + kmem_cache_free(pool->bm_ctrl->list_cache_pool, mb); + } +} + +static void *mtk_bm_page_buff_alloc(struct mtk_bm_pool *pool) +{ + struct mtk_buff *mb; + void *data; + + spin_lock_bh(&pool->lock); + mb = list_first_entry_or_null(&pool->list.buff_list, struct mtk_buff, entry); + if (!mb) { + spin_unlock_bh(&pool->lock); + data = netdev_alloc_frag(pool->buff_size); + } else { + list_del(&mb->entry); + pool->curr_cnt--; + spin_unlock_bh(&pool->lock); + data = mb->data; + kmem_cache_free(pool->bm_ctrl->list_cache_pool, mb); + } + + if (pool->curr_cnt < pool->threshold) + queue_work(pool->reload_workqueue, &pool->reload_work); + + return data; +} + +static void mtk_bm_page_buff_free(struct mtk_bm_pool *pool, void *data) +{ + struct mtk_buff *mb; + + if (pool->curr_cnt >= pool->buff_cnt) { + skb_free_frag(data); + return; + } + + mb = kmem_cache_alloc(pool->bm_ctrl->list_cache_pool, GFP_KERNEL); + if (mb) { + mb->data = data; + spin_lock_bh(&pool->lock); + list_add_tail(&mb->entry, &pool->list.buff_list); + pool->curr_cnt++; + spin_unlock_bh(&pool->lock); + } else { + skb_free_frag(data); + } +} + +static void mtk_bm_page_pool_reload(struct work_struct *work) +{ + struct mtk_bm_pool *pool = container_of(work, struct mtk_bm_pool, reload_work); + struct mtk_buff *mb; + + while (pool->curr_cnt < pool->buff_cnt && !atomic_read(&pool->work_stop)) { + mb = kmem_cache_alloc(pool->bm_ctrl->list_cache_pool, GFP_KERNEL); + if (!mb) + break; + + mb->data = netdev_alloc_frag(pool->buff_size); + if (!mb->data) { + kmem_cache_free(pool->bm_ctrl->list_cache_pool, mb); + break; + } + + spin_lock_bh(&pool->lock); + list_add_tail(&mb->entry, &pool->list.buff_list); + pool->curr_cnt++; + spin_unlock_bh(&pool->lock); + } +} + +static struct mtk_buff_ops page_buf_ops = { + .pool_create = mtk_bm_page_pool_create, + .pool_destroy = mtk_bm_page_pool_destroy, + .buff_alloc = mtk_bm_page_buff_alloc, + .buff_free = 
mtk_bm_page_buff_free, + .pool_reload = mtk_bm_page_pool_reload, +}; + +static int mtk_bm_skb_pool_create(struct mtk_bm_pool *pool) +{ + skb_queue_head_init(&pool->list.skb_list); + + return 0; +} + +static void mtk_bm_skb_pool_destroy(struct mtk_bm_pool *pool) +{ + skb_queue_purge(&pool->list.skb_list); +} + +static void *mtk_bm_skb_buff_alloc(struct mtk_bm_pool *pool) +{ + gfp_t gfp = GFP_KERNEL; + struct sk_buff *skb; + + spin_lock_bh(&pool->lock); + skb = __skb_dequeue(&pool->list.skb_list); + spin_unlock_bh(&pool->lock); + if (!skb) { + if (in_irq() || in_softirq()) + gfp = GFP_ATOMIC; + skb = __dev_alloc_skb(pool->buff_size, gfp); + } + + if (pool->list.skb_list.qlen < pool->threshold) + queue_work(pool->reload_workqueue, &pool->reload_work); + + return skb; +} + +static void mtk_bm_skb_buff_free(struct mtk_bm_pool *pool, void *data) +{ + struct sk_buff *skb = data; + + if (pool->list.skb_list.qlen < pool->buff_cnt) { + /* reset sk_buff (take __alloc_skb as ref.) */ + skb->data = skb->head; + skb->len = 0; + skb_reset_tail_pointer(skb); + /* reserve memory as netdev_alloc_skb */ + skb_reserve(skb, NET_SKB_PAD); + + spin_lock_bh(&pool->lock); + __skb_queue_tail(&pool->list.skb_list, skb); + spin_unlock_bh(&pool->lock); + } else { + dev_kfree_skb_any(skb); + } +} + +static void mtk_bm_skb_pool_reload(struct work_struct *work) +{ + struct mtk_bm_pool *pool = container_of(work, struct mtk_bm_pool, reload_work); + struct sk_buff *skb; + + while (pool->list.skb_list.qlen < pool->buff_cnt && !atomic_read(&pool->work_stop)) { + skb = __dev_alloc_skb(pool->buff_size, GFP_KERNEL); + if (!skb) + break; + + spin_lock_bh(&pool->lock); + __skb_queue_tail(&pool->list.skb_list, skb); + spin_unlock_bh(&pool->lock); + } +} + +static struct mtk_buff_ops skb_buf_ops = { + .pool_create = mtk_bm_skb_pool_create, + .pool_destroy = mtk_bm_skb_pool_destroy, + .buff_alloc = mtk_bm_skb_buff_alloc, + .buff_free = mtk_bm_skb_buff_free, + .pool_reload = mtk_bm_skb_pool_reload, +}; + +/* mtk_bm_init - Init struct mtk_bm_ctrl + * + * @mdev: pointer to mtk_md_dev + * + * Return: return value is 0 on success, a negative error code on failure. 
+ */ +int mtk_bm_init(struct mtk_md_dev *mdev) +{ + char wq_name[MTK_WQ_NAME_LEN]; + struct mtk_bm_ctrl *bm; + + bm = devm_kzalloc(mdev->dev, sizeof(*bm), GFP_KERNEL); + if (!bm) + return -ENOMEM; + + bm->list_cache_pool = kmem_cache_create(mdev->dev_str, sizeof(struct mtk_buff), 0, 0, NULL); + if (unlikely(!bm->list_cache_pool)) + goto err_free_buf; + + snprintf(wq_name, sizeof(wq_name), "mtk_pool_reload_work_h_%s", mdev->dev_str); + bm->pool_reload_workqueue_h = alloc_workqueue(wq_name, + WQ_UNBOUND | WQ_MEM_RECLAIM | WQ_HIGHPRI, 0); + if (!bm->pool_reload_workqueue_h) + goto err_destroy_cache_pool; + + snprintf(wq_name, sizeof(wq_name), "mtk_pool_reload_work_l_%s", mdev->dev_str); + bm->pool_reload_workqueue_l = alloc_workqueue(wq_name, + WQ_UNBOUND | WQ_MEM_RECLAIM, 0); + if (!bm->pool_reload_workqueue_l) + goto err_destroy_wq; + + mutex_init(&bm->pool_list_mtx); + INIT_LIST_HEAD(&bm->pool_list); + + bm->m_ops[MTK_BUFF_SKB] = &skb_buf_ops; + bm->m_ops[MTK_BUFF_PAGE] = &page_buf_ops; + mdev->bm_ctrl = bm; + + return 0; + +err_destroy_wq: + flush_workqueue(bm->pool_reload_workqueue_h); + destroy_workqueue(bm->pool_reload_workqueue_h); +err_destroy_cache_pool: + kmem_cache_destroy(bm->list_cache_pool); +err_free_buf: + devm_kfree(mdev->dev, bm); + return -ENOMEM; +} + +/* mtk_bm_pool_create - Create a buffer pool + * + * @mdev: pointer to mtk_md_dev + * @type: pool type + * @buff_size: the buffer size + * @buff_cnt: the buffer count + * @prio: the priority of reload work + * + * Return: return value is a buffer pool on success, a NULL pointer on failure. + */ +struct mtk_bm_pool *mtk_bm_pool_create(struct mtk_md_dev *mdev, + enum mtk_buff_type type, unsigned int buff_size, + unsigned int buff_cnt, unsigned int prio) +{ + struct mtk_bm_ctrl *bm = mdev->bm_ctrl; + struct mtk_bm_pool *pool; + + pool = devm_kzalloc(mdev->dev, sizeof(*pool), GFP_KERNEL); + if (!pool) + return NULL; + + pool->type = type; + pool->buff_size = buff_size; + pool->buff_cnt = buff_cnt; + pool->pool_id = bm->pool_seq++; + pool->threshold = pool->buff_cnt - pool->buff_cnt / MTK_RELOAD_TH; + pool->dev = mdev->dev; + + if (prio == MTK_BM_HIGH_PRIO) + pool->reload_workqueue = bm->pool_reload_workqueue_h; + else + pool->reload_workqueue = bm->pool_reload_workqueue_l; + pool->prio = prio; + + spin_lock_init(&pool->lock); + pool->ops = bm->m_ops[pool->type]; + INIT_WORK(&pool->reload_work, pool->ops->pool_reload); + if (pool->ops->pool_create(pool)) + goto err_free_buf; + queue_work(pool->reload_workqueue, &pool->reload_work); + atomic_set(&pool->work_stop, 0); + pool->bm_ctrl = bm; + + mutex_lock(&bm->pool_list_mtx); + list_add_tail(&pool->entry, &bm->pool_list); + mutex_unlock(&bm->pool_list_mtx); + + return pool; + +err_free_buf: + dev_err(mdev->dev, "Failed to create bm pool\n"); + devm_kfree(mdev->dev, pool); + return NULL; +} + +/* mtk_bm_alloc - alloc a block of buffer from bm pool + * + * @pool: the buffer pool + * + * Return: return value is a block of buffer from bm pool on success, a NULL pointer on failure. + */ +void *mtk_bm_alloc(struct mtk_bm_pool *pool) +{ + return pool->ops->buff_alloc(pool); +} + +/* mtk_bm_free - free a block of buffer to bm pool + * + * @pool: the buffer pool + * @data: the buffer need to free to pool + * + * Return: return value is 0 on success, a negative error code on failure. 
+ */ +int mtk_bm_free(struct mtk_bm_pool *pool, void *data) +{ + if (!data) + return -EINVAL; + + pool->ops->buff_free(pool, data); + + return 0; +} + +/* mtk_bm_pool_destroy - destroy a buffer pool + * rule: we must stop calling alloc/free before this function is called. + * + * @mdev: pointer to mtk_md_dev + * @pool: the buffer pool need to destroy + * + * Return: return value is 0 on success, a negative error code on failure. + */ +int mtk_bm_pool_destroy(struct mtk_md_dev *mdev, struct mtk_bm_pool *pool) +{ + struct mtk_bm_ctrl *bm = mdev->bm_ctrl; + + atomic_set(&pool->work_stop, 1); + cancel_work_sync(&pool->reload_work); + spin_lock_bh(&pool->lock); + pool->curr_cnt = 0; + spin_unlock_bh(&pool->lock); + + mutex_lock(&bm->pool_list_mtx); + list_del(&pool->entry); + mutex_unlock(&bm->pool_list_mtx); + + pool->ops->pool_destroy(pool); + + devm_kfree(mdev->dev, pool); + return 0; +} + +/* mtk_bm_exit - deinit struct mtk_bm_ctrl + * + * @mdev : pointer to mtk_md_dev + * + * Return: return value is 0 on success, a negative error code on failure. + */ +int mtk_bm_exit(struct mtk_md_dev *mdev) +{ + struct mtk_bm_ctrl *bm = mdev->bm_ctrl; + + flush_workqueue(bm->pool_reload_workqueue_h); + destroy_workqueue(bm->pool_reload_workqueue_h); + flush_workqueue(bm->pool_reload_workqueue_l); + destroy_workqueue(bm->pool_reload_workqueue_l); + + if (unlikely(!list_empty(&bm->pool_list))) + dev_warn(mdev->dev, "bm pool not destroyed\n"); + + kmem_cache_destroy(bm->list_cache_pool); + devm_kfree(mdev->dev, bm); + + return 0; +} diff --git a/drivers/net/wwan/mediatek/mtk_bm.h b/drivers/net/wwan/mediatek/mtk_bm.h new file mode 100644 index 000000000000..6ac473c05296 --- /dev/null +++ b/drivers/net/wwan/mediatek/mtk_bm.h @@ -0,0 +1,79 @@ +/* SPDX-License-Identifier: BSD-3-Clause-Clear + * + * Copyright (c) 2022, MediaTek Inc. 
+ */ + +#ifndef __MTK_BM_H__ +#define __MTK_BM_H__ + +#include +#include + +#include "mtk_dev.h" + +#define MTK_BM_LOW_PRIO 0 +#define MTK_BM_HIGH_PRIO 1 + +enum mtk_buff_type { + MTK_BUFF_SKB = 0, + MTK_BUFF_PAGE, + MTK_BUFF_MAX +}; + +struct mtk_bm_ctrl { + unsigned int pool_seq; + struct workqueue_struct *pool_reload_workqueue_h; + struct workqueue_struct *pool_reload_workqueue_l; + struct kmem_cache *list_cache_pool; + struct list_head pool_list; + struct mutex pool_list_mtx; /* protects the pool list */ + struct mtk_buff_ops *m_ops[MTK_BUFF_MAX]; +}; + +struct mtk_buff { + struct list_head entry; + void *data; +}; + +union mtk_buff_list { + struct sk_buff_head skb_list; + struct list_head buff_list; +}; + +struct mtk_bm_pool { + unsigned int pool_id; + enum mtk_buff_type type; + unsigned int threshold; + unsigned int buff_size; + unsigned int buff_cnt; + unsigned int curr_cnt; + unsigned int prio; + atomic_t work_stop; + spinlock_t lock; /* protects the buffer operation */ + union mtk_buff_list list; + struct device *dev; + struct work_struct reload_work; + struct workqueue_struct *reload_workqueue; + struct list_head entry; + struct mtk_bm_ctrl *bm_ctrl; + struct mtk_buff_ops *ops; +}; + +struct mtk_buff_ops { + int (*pool_create)(struct mtk_bm_pool *pool); + void (*pool_destroy)(struct mtk_bm_pool *pool); + void *(*buff_alloc)(struct mtk_bm_pool *pool); + void (*buff_free)(struct mtk_bm_pool *pool, void *data); + void (*pool_reload)(struct work_struct *work); +}; + +int mtk_bm_init(struct mtk_md_dev *mdev); +int mtk_bm_exit(struct mtk_md_dev *mdev); +struct mtk_bm_pool *mtk_bm_pool_create(struct mtk_md_dev *mdev, + enum mtk_buff_type type, unsigned int buff_size, + unsigned int buff_cnt, unsigned int prio); +int mtk_bm_pool_destroy(struct mtk_md_dev *mdev, struct mtk_bm_pool *pool); +void *mtk_bm_alloc(struct mtk_bm_pool *pool); +int mtk_bm_free(struct mtk_bm_pool *pool, void *data); + +#endif /* __MTK_BM_H__ */ diff --git a/drivers/net/wwan/mediatek/mtk_dev.c b/drivers/net/wwan/mediatek/mtk_dev.c index d3d7bf940d78..513aac37cb9c 100644 --- a/drivers/net/wwan/mediatek/mtk_dev.c +++ b/drivers/net/wwan/mediatek/mtk_dev.c @@ -3,15 +3,24 @@ * Copyright (c) 2022, MediaTek Inc. 
*/ +#include "mtk_bm.h" #include "mtk_dev.h" int mtk_dev_init(struct mtk_md_dev *mdev) { - return 0; + int ret; + + ret = mtk_bm_init(mdev); + if (ret) + goto err_bm_init; + +err_bm_init: + return ret; } void mtk_dev_exit(struct mtk_md_dev *mdev) { + mtk_bm_exit(mdev); } int mtk_dev_start(struct mtk_md_dev *mdev) diff --git a/drivers/net/wwan/mediatek/mtk_dev.h b/drivers/net/wwan/mediatek/mtk_dev.h index bd7b1dc11daf..0c4b727b9c53 100644 --- a/drivers/net/wwan/mediatek/mtk_dev.h +++ b/drivers/net/wwan/mediatek/mtk_dev.h @@ -130,6 +130,7 @@ struct mtk_md_dev { u32 hw_ver; int msi_nvecs; char dev_str[MTK_DEV_STR_LEN]; + struct mtk_bm_ctrl *bm_ctrl; }; int mtk_dev_init(struct mtk_md_dev *mdev); From patchwork Tue Nov 22 11:11:42 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?b?WWFuY2hhbyBZYW5nICjmnajlvabotoUp?= X-Patchwork-Id: 24298 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp2140537wrr; Tue, 22 Nov 2022 03:22:12 -0800 (PST) X-Google-Smtp-Source: AA0mqf4h8XYuO0KkcQxxpKy2678EFXo90NfFSygWl9f5vZ3iQBIGMYnr/qCAiedB2iwq5uP+sUD5 X-Received: by 2002:aa7:c9c3:0:b0:461:8f21:5f12 with SMTP id i3-20020aa7c9c3000000b004618f215f12mr20866667edt.54.1669116132336; Tue, 22 Nov 2022 03:22:12 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1669116132; cv=none; d=google.com; s=arc-20160816; b=podIrGFKn+Z2zllt94JDehrKc24RBRrUcY7/8/sFSONJu+Qn7k2eXWMg5OJjMCnJir BbaQzFNu42IulpyYuTV1OpI+Tw/iS150JiitRV90JGB8eys49YYKiykSpHoMzUQdQSLF 6Nkcucj5aBOesHBS1fapK+RFbHaeaQI1oOzeEfaD1mTy2cTJfqjHmqonHDzhp1De/7NA RNw2sbuj5kSfdWk4GxFsMtlNZF6R2mmZbsfgbU7IdNfiXoo1d85R+Ir6BFwZvRoS3yoB wix7sf2f/7zEFu89kZI3dHXpRnduHwXrYdBUP0LzB7C99ZT1JPBFB7jwxrEUm32vwa2d cWRg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:references:in-reply-to:message-id :date:subject:cc:to:from:dkim-signature; bh=B1Z+g58MrUghONvFYfzTqLxyQqEgKW0qabrUwXKdlLw=; b=YYWw7pDGKB/g5c5UvM5oIKGNPppLACv231U3zEZNE9V8g7hwrtMy0k7u0rrb1/PIhv 9veHbp8PHh3RsfrCtiP8hh4PF2MuLmA5h5VEPdXz/d1igV6uWFSjKOWAvrvLcPTRhTk5 2cxIAlHNIxz84lecpz5Rcj2Ib2Dq2O93VXMMyMsf3PP1wVVYW3B1XPR3yB4s5kMLAmjm FESUfYA4FE2Txgj2Y3W0fgm+P78q6PW82zMOSG+vqobL+3L/e9620H0bBMC3jrWAkzTy c9wafmrbrzrrM2w2bOZJJnnSftFZdKcpmUwJ93lK1LJkM16VK7LchKBCZBTPLnu4aQVW pjAA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@mediatek.com header.s=dk header.b=qhjNaZPx; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=QUARANTINE sp=QUARANTINE dis=NONE) header.from=mediatek.com Received: from out1.vger.email (out1.vger.email. 
From: Yanchao Yang To: Loic Poulain , Sergey Ryazanov , Johannes Berg , "David S .
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , netdev ML , kernel ML CC: MTK ML , Liang Lu , Haijun Liu , Hua Yang , Ting Wang , Felix Chen , Mingliang Xu , Min Dong , Aiden Wang , Guohao Zhang , Chris Feng , Yanchao Yang , Lambert Wang , Mingchuang Qiao , Xiayu Zhang , Haozhe Chang , MediaTek Corporation Subject: [PATCH net-next v1 03/13] net: wwan: tmi: Add control plane transaction layer Date: Tue, 22 Nov 2022 19:11:42 +0800 Message-ID: <20221122111152.160377-4-yanchao.yang@mediatek.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20221122111152.160377-1-yanchao.yang@mediatek.com> References: <20221122111152.160377-1-yanchao.yang@mediatek.com> MIME-Version: 1.0 X-MTK: N X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_MSPIKE_H2,SPF_HELO_PASS, SPF_PASS,UNPARSEABLE_RELAY autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1750195117205721366?= X-GMAIL-MSGID: =?utf-8?q?1750195117205721366?= From: MediaTek Corporation The control plane implements TX services that reside in the transaction layer. The services receive the packets from the port layer and call the corresponding DMA components to transmit data to the device. Meanwhile, TX services receive and manage the port control commands from the port layer. The control plane implements RX services that reside in the transaction layer. The services receive the downlink packets from the modem and transfer the packets to the corresponding port layer interfaces. Signed-off-by: Mingliang Xu Signed-off-by: MediaTek Corporation --- drivers/net/wwan/mediatek/Makefile | 3 +- drivers/net/wwan/mediatek/mtk_ctrl_plane.c | 62 ++++++++++++++++++++++ drivers/net/wwan/mediatek/mtk_ctrl_plane.h | 35 ++++++++++++ drivers/net/wwan/mediatek/mtk_dev.c | 8 +++ drivers/net/wwan/mediatek/mtk_dev.h | 1 + 5 files changed, 108 insertions(+), 1 deletion(-) create mode 100644 drivers/net/wwan/mediatek/mtk_ctrl_plane.c create mode 100644 drivers/net/wwan/mediatek/mtk_ctrl_plane.h diff --git a/drivers/net/wwan/mediatek/Makefile b/drivers/net/wwan/mediatek/Makefile index 122a791e1683..69a9fb7d5b96 100644 --- a/drivers/net/wwan/mediatek/Makefile +++ b/drivers/net/wwan/mediatek/Makefile @@ -5,7 +5,8 @@ MODULE_NAME := mtk_tmi mtk_tmi-y = \ pcie/mtk_pci.o \ mtk_dev.o \ - mtk_bm.o + mtk_bm.o \ + mtk_ctrl_plane.o ccflags-y += -I$(srctree)/$(src)/ ccflags-y += -I$(srctree)/$(src)/pcie/ diff --git a/drivers/net/wwan/mediatek/mtk_ctrl_plane.c b/drivers/net/wwan/mediatek/mtk_ctrl_plane.c new file mode 100644 index 000000000000..4c8f71223a11 --- /dev/null +++ b/drivers/net/wwan/mediatek/mtk_ctrl_plane.c @@ -0,0 +1,62 @@ +// SPDX-License-Identifier: BSD-3-Clause-Clear +/* + * Copyright (c) 2022, MediaTek Inc. 
+ */ + +#include +#include +#include +#include +#include +#include + +#include "mtk_bm.h" +#include "mtk_ctrl_plane.h" + +int mtk_ctrl_init(struct mtk_md_dev *mdev) +{ + struct mtk_ctrl_blk *ctrl_blk; + int err; + + ctrl_blk = devm_kzalloc(mdev->dev, sizeof(*ctrl_blk), GFP_KERNEL); + if (!ctrl_blk) + return -ENOMEM; + + ctrl_blk->mdev = mdev; + mdev->ctrl_blk = ctrl_blk; + + ctrl_blk->bm_pool = mtk_bm_pool_create(mdev, MTK_BUFF_SKB, + VQ_MTU_3_5K, BUFF_3_5K_MAX_CNT, MTK_BM_LOW_PRIO); + if (!ctrl_blk->bm_pool) { + err = -ENOMEM; + goto err_free_mem; + } + + ctrl_blk->bm_pool_63K = mtk_bm_pool_create(mdev, MTK_BUFF_SKB, + VQ_MTU_63K, BUFF_63K_MAX_CNT, MTK_BM_LOW_PRIO); + + if (!ctrl_blk->bm_pool_63K) { + err = -ENOMEM; + goto err_destroy_pool; + } + + return 0; + +err_destroy_pool: + mtk_bm_pool_destroy(mdev, ctrl_blk->bm_pool); +err_free_mem: + devm_kfree(mdev->dev, ctrl_blk); + + return err; +} + +int mtk_ctrl_exit(struct mtk_md_dev *mdev) +{ + struct mtk_ctrl_blk *ctrl_blk = mdev->ctrl_blk; + + mtk_bm_pool_destroy(mdev, ctrl_blk->bm_pool); + mtk_bm_pool_destroy(mdev, ctrl_blk->bm_pool_63K); + devm_kfree(mdev->dev, ctrl_blk); + + return 0; +} diff --git a/drivers/net/wwan/mediatek/mtk_ctrl_plane.h b/drivers/net/wwan/mediatek/mtk_ctrl_plane.h new file mode 100644 index 000000000000..343766a2b39e --- /dev/null +++ b/drivers/net/wwan/mediatek/mtk_ctrl_plane.h @@ -0,0 +1,35 @@ +/* SPDX-License-Identifier: BSD-3-Clause-Clear + * + * Copyright (c) 2022, MediaTek Inc. + */ + +#ifndef __MTK_CTRL_PLANE_H__ +#define __MTK_CTRL_PLANE_H__ + +#include +#include + +#include "mtk_dev.h" + +#define VQ_MTU_3_5K (0xE00) +#define VQ_MTU_63K (0xFC00) + +#define BUFF_3_5K_MAX_CNT (100) +#define BUFF_63K_MAX_CNT (64) + +struct mtk_ctrl_trans { + struct mtk_ctrl_blk *ctrl_blk; + struct mtk_md_dev *mdev; +}; + +struct mtk_ctrl_blk { + struct mtk_md_dev *mdev; + struct mtk_ctrl_trans *trans; + struct mtk_bm_pool *bm_pool; + struct mtk_bm_pool *bm_pool_63K; +}; + +int mtk_ctrl_init(struct mtk_md_dev *mdev); +int mtk_ctrl_exit(struct mtk_md_dev *mdev); + +#endif /* __MTK_CTRL_PLANE_H__ */ diff --git a/drivers/net/wwan/mediatek/mtk_dev.c b/drivers/net/wwan/mediatek/mtk_dev.c index 513aac37cb9c..96b111be206a 100644 --- a/drivers/net/wwan/mediatek/mtk_dev.c +++ b/drivers/net/wwan/mediatek/mtk_dev.c @@ -4,6 +4,7 @@ */ #include "mtk_bm.h" +#include "mtk_ctrl_plane.h" #include "mtk_dev.h" int mtk_dev_init(struct mtk_md_dev *mdev) @@ -14,12 +15,19 @@ int mtk_dev_init(struct mtk_md_dev *mdev) if (ret) goto err_bm_init; + ret = mtk_ctrl_init(mdev); + if (ret) + goto err_ctrl_init; + +err_ctrl_init: + mtk_bm_exit(mdev); err_bm_init: return ret; } void mtk_dev_exit(struct mtk_md_dev *mdev) { + mtk_ctrl_exit(mdev); mtk_bm_exit(mdev); } diff --git a/drivers/net/wwan/mediatek/mtk_dev.h b/drivers/net/wwan/mediatek/mtk_dev.h index 0c4b727b9c53..d6e8e9b2e52a 100644 --- a/drivers/net/wwan/mediatek/mtk_dev.h +++ b/drivers/net/wwan/mediatek/mtk_dev.h @@ -130,6 +130,7 @@ struct mtk_md_dev { u32 hw_ver; int msi_nvecs; char dev_str[MTK_DEV_STR_LEN]; + void *ctrl_blk; struct mtk_bm_ctrl *bm_ctrl; }; From patchwork Tue Nov 22 11:11:43 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?b?WWFuY2hhbyBZYW5nICjmnajlvabotoUp?= X-Patchwork-Id: 24299 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp2141267wrr; Tue, 22 Nov 2022 03:24:00 -0800 (PST) X-Google-Smtp-Source: 
From: Yanchao Yang
To: Loic Poulain , Sergey Ryazanov , Johannes Berg , "David S . Miller" , Eric Dumazet , "Jakub Kicinski" , Paolo Abeni , netdev ML , kernel ML
CC: MTK ML , Liang Lu , Haijun Liu , Hua Yang , Ting Wang , Felix Chen , Mingliang Xu , Min Dong , Aiden Wang , Guohao Zhang , Chris Feng , "Yanchao Yang" , Lambert Wang , Mingchuang Qiao , Xiayu Zhang , Haozhe Chang , MediaTek Corporation
Subject: [PATCH net-next v1 04/13] net: wwan: tmi: Add control DMA interface
Date: Tue, 22 Nov 2022 19:11:43 +0800
Message-ID: <20221122111152.160377-5-yanchao.yang@mediatek.com>
In-Reply-To: <20221122111152.160377-1-yanchao.yang@mediatek.com>
References: <20221122111152.160377-1-yanchao.yang@mediatek.com>

From: MediaTek Corporation

Cross Layer Direct Memory Access (CLDMA) is the hardware interface used by the control plane to transfer data between the host and the device. It provides 8 hardware queues for the device AP and the modem respectively.

The CLDMA driver uses General Purpose Descriptors (GPDs) to describe transaction information in a format the CLDMA hardware can parse. Once a CLDMA hardware transaction is started, the hardware fetches and parses GPDs to transfer the data correctly. To facilitate these transactions, a GPD ring is used for each queue; once started, the CLDMA hardware traverses the ring and transfers data between the host and the device until no GPD is available.

CLDMA TX flow: once a TX service receives TX data from the port layer, it uses the APIs exported by the CLDMA driver to configure a GPD with the DMA address of the TX data. After that, the service triggers CLDMA to fetch the first available GPD and transfer the data.

CLDMA RX flow: when there is RX data from the MD (modem), the CLDMA hardware asserts an interrupt to notify the host to fetch the data and dispatch it to the FSM (for handshake messages) or to the port layer. After the CLDMA open completes, all RX GPDs are filled and ready to receive data from the device.
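As a condensed illustration of the TX flow described above, the GPD programming done by mtk_cldma_submit_tx() in this patch boils down to the following (the DMA mapping, budget accounting and locking are omitted here):

    /* Fill the next free TX GPD with the DMA address of the skb, then hand
     * ownership to the hardware; the start command is issued afterwards by
     * mtk_cldma_start_xfer_t800() via hw_ops.start_xfer().
     */
    req = txq->req_pool + txq->wr_idx;
    req->gpd->tx_gpd.data_buff_ptr_h = cpu_to_le32((u64)data_dma_addr >> 32);
    req->gpd->tx_gpd.data_buff_ptr_l = cpu_to_le32(data_dma_addr);
    req->gpd->tx_gpd.data_buff_len = cpu_to_le16(skb->len);
    req->gpd->tx_gpd.gpd_flags = CLDMA_GPD_FLAG_IOC | CLDMA_GPD_FLAG_HWO;
    txq->wr_idx = (txq->wr_idx + 1) % txq->req_pool_size;
    wmb(); /* make the GPD visible to the device before the start command */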
Signed-off-by: Min Dong Signed-off-by: MediaTek Corporation --- drivers/net/wwan/mediatek/Makefile | 8 +- drivers/net/wwan/mediatek/mtk_cldma.c | 258 +++++ drivers/net/wwan/mediatek/mtk_cldma.h | 158 +++ drivers/net/wwan/mediatek/mtk_ctrl_plane.h | 48 + .../wwan/mediatek/pcie/mtk_cldma_drv_t800.c | 948 ++++++++++++++++++ .../wwan/mediatek/pcie/mtk_cldma_drv_t800.h | 20 + 6 files changed, 1437 insertions(+), 3 deletions(-) create mode 100644 drivers/net/wwan/mediatek/mtk_cldma.c create mode 100644 drivers/net/wwan/mediatek/mtk_cldma.h create mode 100644 drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.c create mode 100644 drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.h diff --git a/drivers/net/wwan/mediatek/Makefile b/drivers/net/wwan/mediatek/Makefile index 69a9fb7d5b96..77158d3d587a 100644 --- a/drivers/net/wwan/mediatek/Makefile +++ b/drivers/net/wwan/mediatek/Makefile @@ -4,9 +4,11 @@ MODULE_NAME := mtk_tmi mtk_tmi-y = \ pcie/mtk_pci.o \ - mtk_dev.o \ - mtk_bm.o \ - mtk_ctrl_plane.o + mtk_dev.o \ + mtk_bm.o \ + mtk_ctrl_plane.o \ + mtk_cldma.o \ + pcie/mtk_cldma_drv_t800.o ccflags-y += -I$(srctree)/$(src)/ ccflags-y += -I$(srctree)/$(src)/pcie/ diff --git a/drivers/net/wwan/mediatek/mtk_cldma.c b/drivers/net/wwan/mediatek/mtk_cldma.c new file mode 100644 index 000000000000..dc1713307797 --- /dev/null +++ b/drivers/net/wwan/mediatek/mtk_cldma.c @@ -0,0 +1,258 @@ +// SPDX-License-Identifier: BSD-3-Clause-Clear +/* + * Copyright (c) 2022, MediaTek Inc. + */ + +#include +#include +#include +#include + +#include "mtk_cldma.h" +#include "mtk_cldma_drv_t800.h" + +/* cldma_init() - Initialize CLDMA + * + * @trans: pointer to transaction structure + * + * Return: + * 0 - OK + * -ENOMEM - out of memory + */ +static int mtk_cldma_init(struct mtk_ctrl_trans *trans) +{ + struct cldma_dev *cd; + + cd = devm_kzalloc(trans->mdev->dev, sizeof(*cd), GFP_KERNEL); + if (!cd) + return -ENOMEM; + + cd->trans = trans; + cd->hw_ops.init = mtk_cldma_hw_init_t800; + cd->hw_ops.exit = mtk_cldma_hw_exit_t800; + cd->hw_ops.txq_alloc = mtk_cldma_txq_alloc_t800; + cd->hw_ops.rxq_alloc = mtk_cldma_rxq_alloc_t800; + cd->hw_ops.txq_free = mtk_cldma_txq_free_t800; + cd->hw_ops.rxq_free = mtk_cldma_rxq_free_t800; + cd->hw_ops.start_xfer = mtk_cldma_start_xfer_t800; + + trans->dev[CLDMA_CLASS_ID] = cd; + + return 0; +} + +/* cldma_exit() - De-Initialize CLDMA + * + * @trans: pointer to transaction structure + * + * Return: + * 0 - OK + */ +static int mtk_cldma_exit(struct mtk_ctrl_trans *trans) +{ + struct cldma_dev *cd; + + cd = trans->dev[CLDMA_CLASS_ID]; + if (!cd) + return 0; + + devm_kfree(trans->mdev->dev, cd); + + return 0; +} + +/* cldma_open() - Initialize CLDMA hardware queue + * + * @cd: pointer to CLDMA device + * @skb: pointer to socket buffer + * + * Return: + * 0 - OK + * -EBUSY - hardware queue is busy + * -EIO - failed to initialize hardware queue + * -EINVAL - invalid input parameters + */ +static int mtk_cldma_open(struct cldma_dev *cd, struct sk_buff *skb) +{ + struct trb_open_priv *trb_open_priv = (struct trb_open_priv *)skb->data; + struct trb *trb = (struct trb *)skb->cb; + struct cldma_hw *hw; + struct virtq *vq; + struct txq *txq; + struct rxq *rxq; + int err = 0; + + vq = cd->trans->vq_tbl + trb->vqno; + hw = cd->cldma_hw[vq->hif_id & HIF_ID_BITMASK]; + trb_open_priv->tx_mtu = vq->tx_mtu; + trb_open_priv->rx_mtu = vq->rx_mtu; + if (unlikely(vq->rxqno < 0 || vq->rxqno >= HW_QUEUE_NUM) || + unlikely(vq->txqno < 0 || vq->txqno >= HW_QUEUE_NUM)) { + err = -EINVAL; + goto exit; + } + + if 
(hw->txq[vq->txqno] || hw->rxq[vq->rxqno]) { + err = -EBUSY; + goto exit; + } + + txq = cd->hw_ops.txq_alloc(hw, skb); + if (!txq) { + err = -EIO; + goto exit; + } + + rxq = cd->hw_ops.rxq_alloc(hw, skb); + if (!rxq) { + err = -EIO; + cd->hw_ops.txq_free(hw, trb->vqno); + goto exit; + } + +exit: + trb->status = err; + trb->trb_complete(skb); + + return err; +} + +/* cldma_tx() - start CLDMA TX transaction + * + * @cd: pointer to CLDMA device + * @skb: pointer to socket buffer + * + * Return: + * 0 - OK + * -EPIPE - hardware queue is broken + */ +static int mtk_cldma_tx(struct cldma_dev *cd, struct sk_buff *skb) +{ + struct trb *trb = (struct trb *)skb->cb; + struct cldma_hw *hw; + struct virtq *vq; + struct txq *txq; + + vq = cd->trans->vq_tbl + trb->vqno; + hw = cd->cldma_hw[vq->hif_id & HIF_ID_BITMASK]; + txq = hw->txq[vq->txqno]; + if (txq->is_stopping) + return -EPIPE; + + cd->hw_ops.start_xfer(hw, vq->txqno); + + return 0; +} + +/* cldma_close() - De-Initialize CLDMA hardware queue + * + * @cd: pointer to CLDMA device + * @skb: pointer to socket buffer + * + * Return: + * 0 - OK + */ +static int mtk_cldma_close(struct cldma_dev *cd, struct sk_buff *skb) +{ + struct trb *trb = (struct trb *)skb->cb; + struct cldma_hw *hw; + struct virtq *vq; + + vq = cd->trans->vq_tbl + trb->vqno; + hw = cd->cldma_hw[vq->hif_id & HIF_ID_BITMASK]; + + cd->hw_ops.txq_free(hw, trb->vqno); + cd->hw_ops.rxq_free(hw, trb->vqno); + + trb->status = 0; + trb->trb_complete(skb); + + return 0; +} + +static int mtk_cldma_submit_tx(void *dev, struct sk_buff *skb) +{ + struct trb *trb = (struct trb *)skb->cb; + struct cldma_dev *cd = dev; + dma_addr_t data_dma_addr; + struct cldma_hw *hw; + struct tx_req *req; + struct virtq *vq; + struct txq *txq; + int err; + + vq = cd->trans->vq_tbl + trb->vqno; + hw = cd->cldma_hw[vq->hif_id & HIF_ID_BITMASK]; + txq = hw->txq[vq->txqno]; + + if (!txq->req_budget) + return -EAGAIN; + + err = mtk_dma_map_single(hw->mdev, &data_dma_addr, skb->data, + skb->len, DMA_TO_DEVICE); + if (err) + return -EFAULT; + + mutex_lock(&txq->lock); + txq->req_budget--; + mutex_unlock(&txq->lock); + + req = txq->req_pool + txq->wr_idx; + req->gpd->tx_gpd.debug_id = 0x01; + req->gpd->tx_gpd.data_buff_ptr_h = cpu_to_le32((u64)(data_dma_addr) >> 32); + req->gpd->tx_gpd.data_buff_ptr_l = cpu_to_le32(data_dma_addr); + req->gpd->tx_gpd.data_buff_len = cpu_to_le16(skb->len); + req->gpd->tx_gpd.gpd_flags = CLDMA_GPD_FLAG_IOC | CLDMA_GPD_FLAG_HWO; + + req->data_vm_addr = skb->data; + req->data_dma_addr = data_dma_addr; + req->data_len = skb->len; + req->skb = skb; + txq->wr_idx = (txq->wr_idx + 1) % txq->req_pool_size; + + wmb(); /* ensure GPD setup done before HW start */ + + return 0; +} + +/* cldma_trb_process() - Dispatch trb request to low-level CLDMA routine + * + * @dev: pointer to CLDMA device + * @skb: pointer to socket buffer + * + * Return: + * 0 - OK + * -EBUSY - hardware queue is busy + * -EINVAL - invalid input + * -EIO - failed to initialize hardware queue + * -EPIPE - hardware queue is broken + */ +static int mtk_cldma_trb_process(void *dev, struct sk_buff *skb) +{ + struct trb *trb = (struct trb *)skb->cb; + struct cldma_dev *cd = dev; + int err; + + switch (trb->cmd) { + case TRB_CMD_ENABLE: + err = mtk_cldma_open(cd, skb); + break; + case TRB_CMD_TX: + err = mtk_cldma_tx(cd, skb); + break; + case TRB_CMD_DISABLE: + err = mtk_cldma_close(cd, skb); + break; + default: + err = -EINVAL; + } + + return err; +} + +struct hif_ops cldma_ops = { + .init = mtk_cldma_init, + .exit = mtk_cldma_exit, + 
.trb_process = mtk_cldma_trb_process, + .submit_tx = mtk_cldma_submit_tx, +}; diff --git a/drivers/net/wwan/mediatek/mtk_cldma.h b/drivers/net/wwan/mediatek/mtk_cldma.h new file mode 100644 index 000000000000..4fd5f826bcf6 --- /dev/null +++ b/drivers/net/wwan/mediatek/mtk_cldma.h @@ -0,0 +1,158 @@ +/* SPDX-License-Identifier: BSD-3-Clause-Clear + * + * Copyright (c) 2022, MediaTek Inc. + */ + +#ifndef __MTK_CLDMA_H__ +#define __MTK_CLDMA_H__ + +#include + +#include "mtk_ctrl_plane.h" +#include "mtk_dev.h" + +#define HW_QUEUE_NUM 8 +#define ALLQ (0XFF) +#define LINK_ERROR_VAL (0XFFFFFFFF) + +#define CLDMA_CLASS_ID 0 + +#define NR_CLDMA 2 +#define CLDMA0 (((CLDMA_CLASS_ID) << HIF_CLASS_SHIFT) + 0) +#define CLDMA1 (((CLDMA_CLASS_ID) << HIF_CLASS_SHIFT) + 1) + +#define TXQ(N) (N) +#define RXQ(N) (N) + +#define CLDMA_GPD_FLAG_HWO BIT(0) +#define CLDMA_GPD_FLAG_IOC BIT(7) + +enum mtk_ip_busy_src { + IP_BUSY_TXDONE = 0, + IP_BUSY_RXDONE = 24, +}; + +enum mtk_intr_type { + QUEUE_XFER_DONE = 0, + QUEUE_ERROR = 16, + INVALID_TYPE +}; + +enum mtk_tx_rx { + DIR_TX, + DIR_RX, + INVALID_DIR +}; + +union gpd { + struct { + u8 gpd_flags; + u8 non_used1; + __le16 data_allow_len; + __le32 next_gpd_ptr_h; + __le32 next_gpd_ptr_l; + __le32 data_buff_ptr_h; + __le32 data_buff_ptr_l; + __le16 data_recv_len; + u8 non_used2; + u8 debug_id; + } rx_gpd; + + struct { + u8 gpd_flags; + u8 non_used1; + u8 non_used2; + u8 debug_id; + __le32 next_gpd_ptr_h; + __le32 next_gpd_ptr_l; + __le32 data_buff_ptr_h; + __le32 data_buff_ptr_l; + __le16 data_buff_len; + __le16 non_used3; + } tx_gpd; +}; + +struct rx_req { + union gpd *gpd; + int mtu; + struct sk_buff *skb; + size_t data_len; + dma_addr_t gpd_dma_addr; + dma_addr_t data_dma_addr; +}; + +struct rxq { + struct cldma_hw *hw; + int rxqno; + int vqno; + struct virtq *vq; + struct work_struct rx_done_work; + struct rx_req *req_pool; + int req_pool_size; + int free_idx; + unsigned short rx_done_cnt; + void *arg; + int (*rx_done)(struct sk_buff *skb, int len, void *priv); +}; + +struct tx_req { + union gpd *gpd; + int mtu; + void *data_vm_addr; + size_t data_len; + dma_addr_t data_dma_addr; + dma_addr_t gpd_dma_addr; + struct sk_buff *skb; + int (*trb_complete)(struct sk_buff *skb); +}; + +struct txq { + struct cldma_hw *hw; + int txqno; + int vqno; + struct virtq *vq; + struct mutex lock; /* protect structure fields */ + struct work_struct tx_done_work; + struct tx_req *req_pool; + int req_pool_size; + int req_budget; + int wr_idx; + int free_idx; + bool tx_started; + bool is_stopping; + unsigned short tx_done_cnt; +}; + +struct cldma_dev; +struct cldma_hw; + +struct cldma_hw_ops { + int (*init)(struct cldma_dev *cd, int hif_id); + int (*exit)(struct cldma_dev *cd, int hif_id); + struct txq* (*txq_alloc)(struct cldma_hw *hw, struct sk_buff *skb); + struct rxq* (*rxq_alloc)(struct cldma_hw *hw, struct sk_buff *skb); + int (*txq_free)(struct cldma_hw *hw, int vqno); + int (*rxq_free)(struct cldma_hw *hw, int vqno); + int (*start_xfer)(struct cldma_hw *hw, int qno); +}; + +struct cldma_hw { + int hif_id; + int base_addr; + int pci_ext_irq_id; + struct mtk_md_dev *mdev; + struct cldma_dev *cd; + struct txq *txq[HW_QUEUE_NUM]; + struct rxq *rxq[HW_QUEUE_NUM]; + struct dma_pool *dma_pool; + struct workqueue_struct *wq; +}; + +struct cldma_dev { + struct cldma_hw *cldma_hw[NR_CLDMA]; + struct mtk_ctrl_trans *trans; + struct cldma_hw_ops hw_ops; +}; + +extern struct hif_ops cldma_ops; +#endif diff --git a/drivers/net/wwan/mediatek/mtk_ctrl_plane.h 
b/drivers/net/wwan/mediatek/mtk_ctrl_plane.h index 343766a2b39e..427d5a06b3cc 100644 --- a/drivers/net/wwan/mediatek/mtk_ctrl_plane.h +++ b/drivers/net/wwan/mediatek/mtk_ctrl_plane.h @@ -17,7 +17,55 @@ #define BUFF_3_5K_MAX_CNT (100) #define BUFF_63K_MAX_CNT (64) +#define HIF_CLASS_NUM (1) +#define HIF_CLASS_SHIFT (8) +#define HIF_ID_BITMASK (0x01) + +enum mtk_trb_cmd_type { + TRB_CMD_ENABLE = 1, + TRB_CMD_TX, + TRB_CMD_DISABLE, +}; + +struct trb_open_priv { + u16 tx_mtu; + u16 rx_mtu; + int (*rx_done)(struct sk_buff *skb, int len, void *priv); +}; + +struct trb { + u8 vqno; + enum mtk_trb_cmd_type cmd; + int status; + struct kref kref; + void *priv; + int (*trb_complete)(struct sk_buff *skb); +}; + +struct virtq { + int vqno; + int hif_id; + int txqno; + int rxqno; + int tx_mtu; + int rx_mtu; + int tx_req_num; + int rx_req_num; +}; + +struct mtk_ctrl_trans; + +struct hif_ops { + int (*init)(struct mtk_ctrl_trans *trans); + int (*exit)(struct mtk_ctrl_trans *trans); + int (*submit_tx)(void *dev, struct sk_buff *skb); + int (*trb_process)(void *dev, struct sk_buff *skb); +}; + struct mtk_ctrl_trans { + struct virtq *vq_tbl; + void *dev[HIF_CLASS_NUM]; + struct hif_ops *ops[HIF_CLASS_NUM]; struct mtk_ctrl_blk *ctrl_blk; struct mtk_md_dev *mdev; }; diff --git a/drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.c b/drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.c new file mode 100644 index 000000000000..d2e682453b57 --- /dev/null +++ b/drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.c @@ -0,0 +1,948 @@ +// SPDX-License-Identifier: BSD-3-Clause-Clear +/* + * Copyright (c) 2022, MediaTek Inc. + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "mtk_bm.h" +#include "mtk_cldma_drv_t800.h" +#include "mtk_ctrl_plane.h" +#include "mtk_dev.h" +#include "mtk_reg.h" + +#define DMA_POOL_NAME_LEN 64 + +#define CLDMA_STOP_HW_WAIT_TIME_MS (20) +#define CLDMA_STOP_HW_POLLING_MAX_CNT (10) + +#define CLDMA0_BASE_ADDR (0x1021C000) +#define CLDMA1_BASE_ADDR (0x1021E000) + +/* CLDMA IN(Tx) */ +#define REG_CLDMA_UL_START_ADDRL_0 (0x0004) +#define REG_CLDMA_UL_START_ADDRH_0 (0x0008) +#define REG_CLDMA_UL_STATUS (0x0084) +#define REG_CLDMA_UL_START_CMD (0x0088) +#define REG_CLDMA_UL_RESUME_CMD (0x008C) +#define REG_CLDMA_UL_STOP_CMD (0x0090) +#define REG_CLDMA_UL_ERROR (0x0094) +#define REG_CLDMA_UL_CFG (0x0098) +#define REG_CLDMA_UL_DUMMY_0 (0x009C) + +/* CLDMA OUT(Rx) */ +#define REG_CLDMA_SO_START_CMD (0x0400 + 0x01BC) +#define REG_CLDMA_SO_RESUME_CMD (0x0400 + 0x01C0) +#define REG_CLDMA_SO_STOP_CMD (0x0400 + 0x01C4) +#define REG_CLDMA_SO_DUMMY_0 (0x0400 + 0x0108) +#define REG_CLDMA_SO_CFG (0x0400 + 0x0004) +#define REG_CLDMA_SO_START_ADDRL_0 (0x0400 + 0x0078) +#define REG_CLDMA_SO_START_ADDRH_0 (0x0400 + 0x007C) +#define REG_CLDMA_SO_CUR_ADDRL_0 (0x0400 + 0x00B8) +#define REG_CLDMA_SO_CUR_ADDRH_0 (0x0400 + 0x00BC) +#define REG_CLDMA_SO_STATUS (0x0400 + 0x00F8) + +/* CLDMA MISC */ +#define REG_CLDMA_L2TISAR0 (0x0800 + 0x0010) +#define REG_CLDMA_L2TISAR1 (0x0800 + 0x0014) +#define REG_CLDMA_L2TIMR0 (0x0800 + 0x0018) +#define REG_CLDMA_L2TIMR1 (0x0800 + 0x001C) +#define REG_CLDMA_L2TIMCR0 (0x0800 + 0x0020) +#define REG_CLDMA_L2TIMCR1 (0x0800 + 0x0024) +#define REG_CLDMA_L2TIMSR0 (0x0800 + 0x0028) +#define REG_CLDMA_L2TIMSR1 (0x0800 + 0x002C) +#define REG_CLDMA_L3TISAR0 (0x0800 + 0x0030) +#define REG_CLDMA_L3TISAR1 (0x0800 + 0x0034) +#define REG_CLDMA_L3TIMR0 (0x0800 + 0x0038) +#define REG_CLDMA_L3TIMR1 (0x0800 + 0x003C) +#define REG_CLDMA_L3TIMCR0 (0x0800 + 
0x0040) +#define REG_CLDMA_L3TIMCR1 (0x0800 + 0x0044) +#define REG_CLDMA_L3TIMSR0 (0x0800 + 0x0048) +#define REG_CLDMA_L3TIMSR1 (0x0800 + 0x004C) +#define REG_CLDMA_L2RISAR0 (0x0800 + 0x0050) +#define REG_CLDMA_L2RISAR1 (0x0800 + 0x0054) +#define REG_CLDMA_L3RISAR0 (0x0800 + 0x0070) +#define REG_CLDMA_L3RISAR1 (0x0800 + 0x0074) +#define REG_CLDMA_L3RIMR0 (0x0800 + 0x0078) +#define REG_CLDMA_L3RIMR1 (0x0800 + 0x007C) +#define REG_CLDMA_L3RIMCR0 (0x0800 + 0x0080) +#define REG_CLDMA_L3RIMCR1 (0x0800 + 0x0084) +#define REG_CLDMA_L3RIMSR0 (0x0800 + 0x0088) +#define REG_CLDMA_L3RIMSR1 (0x0800 + 0x008C) +#define REG_CLDMA_IP_BUSY (0x0800 + 0x00B4) +#define REG_CLDMA_L3TISAR2 (0x0800 + 0x00C0) +#define REG_CLDMA_L3TIMR2 (0x0800 + 0x00C4) +#define REG_CLDMA_L3TIMCR2 (0x0800 + 0x00C8) +#define REG_CLDMA_L3TIMSR2 (0x0800 + 0x00CC) + +#define REG_CLDMA_L2RIMR0 (0x0800 + 0x00E8) +#define REG_CLDMA_L2RIMR1 (0x0800 + 0x00EC) +#define REG_CLDMA_L2RIMCR0 (0x0800 + 0x00F0) +#define REG_CLDMA_L2RIMCR1 (0x0800 + 0x00F4) +#define REG_CLDMA_L2RIMSR0 (0x0800 + 0x00F8) +#define REG_CLDMA_L2RIMSR1 (0x0800 + 0x00FC) + +#define REG_CLDMA_INT_EAP_USIP_MASK (0x0800 + 0x011C) +#define REG_CLDMA_RQ1_GPD_DONE_CNT (0x0800 + 0x0174) +#define REG_CLDMA_TQ1_GPD_DONE_CNT (0x0800 + 0x0184) + +#define REG_CLDMA_IP_BUSY_TO_PCIE_MASK (0x0800 + 0x0194) +#define REG_CLDMA_IP_BUSY_TO_PCIE_MASK_SET (0x0800 + 0x0198) +#define REG_CLDMA_IP_BUSY_TO_PCIE_MASK_CLR (0x0800 + 0x019C) + +#define REG_CLDMA_IP_BUSY_TO_AP_MASK (0x0800 + 0x0200) +#define REG_CLDMA_IP_BUSY_TO_AP_MASK_SET (0x0800 + 0x0204) +#define REG_CLDMA_IP_BUSY_TO_AP_MASK_CLR (0x0800 + 0x0208) + +/* CLDMA RESET */ +#define REG_INFRA_RST0_SET (0x120) +#define REG_INFRA_RST0_CLR (0x124) +#define REG_CLDMA0_RST_SET_BIT (8) +#define REG_CLDMA0_RST_CLR_BIT (8) + +static void mtk_cldma_setup_start_addr(struct mtk_md_dev *mdev, int base, + enum mtk_tx_rx dir, int qno, dma_addr_t addr) +{ + unsigned int addr_l; + unsigned int addr_h; + + if (dir == DIR_TX) { + addr_l = base + REG_CLDMA_UL_START_ADDRL_0 + qno * HW_QUEUE_NUM; + addr_h = base + REG_CLDMA_UL_START_ADDRH_0 + qno * HW_QUEUE_NUM; + } else { + addr_l = base + REG_CLDMA_SO_START_ADDRL_0 + qno * HW_QUEUE_NUM; + addr_h = base + REG_CLDMA_SO_START_ADDRH_0 + qno * HW_QUEUE_NUM; + } + + mtk_hw_write32(mdev, addr_l, (u32)addr); + mtk_hw_write32(mdev, addr_h, (u32)((u64)addr >> 32)); +} + +static void mtk_cldma_mask_intr(struct mtk_md_dev *mdev, int base, + enum mtk_tx_rx dir, int qno, enum mtk_intr_type type) +{ + u32 addr; + u32 val; + + if (unlikely(qno < 0 || qno >= HW_QUEUE_NUM)) + return; + + if (dir == DIR_TX) + addr = base + REG_CLDMA_L2TIMSR0; + else + addr = base + REG_CLDMA_L2RIMSR0; + + if (qno == ALLQ) + val = qno << type; + else + val = BIT(qno) << type; + + mtk_hw_write32(mdev, addr, val); +} + +static void mtk_cldma_unmask_intr(struct mtk_md_dev *mdev, int base, + enum mtk_tx_rx dir, int qno, enum mtk_intr_type type) +{ + u32 addr; + u32 val; + + if (unlikely(qno < 0 || qno >= HW_QUEUE_NUM)) + return; + + if (dir == DIR_TX) + addr = base + REG_CLDMA_L2TIMCR0; + else + addr = base + REG_CLDMA_L2RIMCR0; + + if (qno == ALLQ) + val = qno << type; + else + val = BIT(qno) << type; + + mtk_hw_write32(mdev, addr, val); +} + +static void mtk_cldma_clr_intr_status(struct mtk_md_dev *mdev, int base, + int dir, int qno, enum mtk_intr_type type) +{ + u32 addr; + u32 val; + + if (unlikely(qno < 0 || qno >= HW_QUEUE_NUM)) + return; + + if (type == QUEUE_ERROR) { + if (dir == DIR_TX) { + val = mtk_hw_read32(mdev, base + 
REG_CLDMA_L3TISAR0); + mtk_hw_write32(mdev, base + REG_CLDMA_L3TISAR0, val); + val = mtk_hw_read32(mdev, base + REG_CLDMA_L3TISAR1); + mtk_hw_write32(mdev, base + REG_CLDMA_L3TISAR1, val); + } else { + val = mtk_hw_read32(mdev, base + REG_CLDMA_L3RISAR0); + mtk_hw_write32(mdev, base + REG_CLDMA_L3RISAR0, val); + val = mtk_hw_read32(mdev, base + REG_CLDMA_L3RISAR1); + mtk_hw_write32(mdev, base + REG_CLDMA_L3RISAR1, val); + } + } + + if (dir == DIR_TX) + addr = base + REG_CLDMA_L2TISAR0; + else + addr = base + REG_CLDMA_L2RISAR0; + + if (qno == ALLQ) + val = qno << type; + else + val = BIT(qno) << type; + + mtk_hw_write32(mdev, addr, val); + val = mtk_hw_read32(mdev, addr); +} + +static u32 mtk_cldma_check_intr_status(struct mtk_md_dev *mdev, int base, + int dir, int qno, enum mtk_intr_type type) +{ + u32 addr; + u32 val; + u32 sta; + + if (dir == DIR_TX) + addr = base + REG_CLDMA_L2TISAR0; + else + addr = base + REG_CLDMA_L2RISAR0; + + val = mtk_hw_read32(mdev, addr); + if (val == LINK_ERROR_VAL) + sta = val; + else if (qno == ALLQ) + sta = (val >> type) & 0xFF; + else + sta = (val >> type) & BIT(qno); + return sta; +} + +static void mtk_cldma_start_queue(struct mtk_md_dev *mdev, int base, enum mtk_tx_rx dir, int qno) +{ + u32 val = BIT(qno); + u32 addr; + + if (dir == DIR_TX) + addr = base + REG_CLDMA_UL_START_CMD; + else + addr = base + REG_CLDMA_SO_START_CMD; + + mtk_hw_write32(mdev, addr, val); +} + +static void mtk_cldma_resume_queue(struct mtk_md_dev *mdev, int base, enum mtk_tx_rx dir, int qno) +{ + u32 val = BIT(qno); + u32 addr; + + if (dir == DIR_TX) + addr = base + REG_CLDMA_UL_RESUME_CMD; + else + addr = base + REG_CLDMA_SO_RESUME_CMD; + + mtk_hw_write32(mdev, addr, val); +} + +static u32 mtk_cldma_queue_status(struct mtk_md_dev *mdev, int base, enum mtk_tx_rx dir, int qno) +{ + u32 addr; + u32 val; + + if (dir == DIR_TX) + addr = base + REG_CLDMA_UL_STATUS; + else + addr = base + REG_CLDMA_SO_STATUS; + + val = mtk_hw_read32(mdev, addr); + + if (qno == ALLQ || val == LINK_ERROR_VAL) + return val; + else + return val & BIT(qno); +} + +static void mtk_cldma_mask_ip_busy_to_pci(struct mtk_md_dev *mdev, + int base, int qno, enum mtk_ip_busy_src type) +{ + if (qno == ALLQ) + mtk_hw_write32(mdev, base + REG_CLDMA_IP_BUSY_TO_PCIE_MASK_SET, qno << type); + else + mtk_hw_write32(mdev, base + REG_CLDMA_IP_BUSY_TO_PCIE_MASK_SET, BIT(qno) << type); +} + +static void mtk_cldma_unmask_ip_busy_to_pci(struct mtk_md_dev *mdev, + int base, int qno, enum mtk_ip_busy_src type) +{ + if (qno == ALLQ) + mtk_hw_write32(mdev, base + REG_CLDMA_IP_BUSY_TO_PCIE_MASK_CLR, qno << type); + else + mtk_hw_write32(mdev, base + REG_CLDMA_IP_BUSY_TO_PCIE_MASK_CLR, BIT(qno) << type); +} + +static void mtk_cldma_stop_queue(struct mtk_md_dev *mdev, int base, enum mtk_tx_rx dir, int qno) +{ + u32 val = (qno == ALLQ) ? 
qno : BIT(qno); + u32 addr; + + if (dir == DIR_TX) + addr = base + REG_CLDMA_UL_STOP_CMD; + else + addr = base + REG_CLDMA_SO_STOP_CMD; + + mtk_hw_write32(mdev, addr, val); +} + +static void mtk_cldma_clear_ip_busy(struct mtk_md_dev *mdev, int base) +{ + mtk_hw_write32(mdev, base + REG_CLDMA_IP_BUSY, 0x01); +} + +static void mtk_cldma_hw_init(struct mtk_md_dev *mdev, int base) +{ + u32 val = mtk_hw_read32(mdev, base + REG_CLDMA_UL_CFG); + + val = (val & (~(0x7 << 5))) | ((0x4) << 5); + mtk_hw_write32(mdev, base + REG_CLDMA_UL_CFG, val); + + val = mtk_hw_read32(mdev, base + REG_CLDMA_SO_CFG); + val = (val & (~(0x7 << 10))) | ((0x4) << 10) | (1 << 2); + mtk_hw_write32(mdev, base + REG_CLDMA_SO_CFG, val); + + mtk_hw_write32(mdev, base + REG_CLDMA_IP_BUSY_TO_PCIE_MASK_CLR, 0); + mtk_hw_write32(mdev, base + REG_CLDMA_IP_BUSY_TO_AP_MASK_CLR, 0); + + /* enable interrupt to PCIe */ + mtk_hw_write32(mdev, base + REG_CLDMA_INT_EAP_USIP_MASK, 0); + + /* disable illegal memory check */ + mtk_hw_write32(mdev, base + REG_CLDMA_UL_DUMMY_0, 1); + mtk_hw_write32(mdev, base + REG_CLDMA_SO_DUMMY_0, 1); +} + +static void mtk_cldma_tx_done_work(struct work_struct *work) +{ + struct txq *txq = container_of(work, struct txq, tx_done_work); + struct mtk_md_dev *mdev = txq->hw->mdev; + struct tx_req *req; + unsigned int state; + struct trb *trb; + int i; + +again: + for (i = 0; i < txq->req_pool_size; i++) { + req = txq->req_pool + txq->free_idx; + if ((req->gpd->tx_gpd.gpd_flags & CLDMA_GPD_FLAG_HWO) || !req->data_vm_addr) + break; + + mtk_dma_unmap_single(mdev, req->data_dma_addr, req->data_len, DMA_TO_DEVICE); + + trb = (struct trb *)req->skb->cb; + trb->status = 0; + trb->trb_complete(req->skb); + + req->data_vm_addr = NULL; + req->data_dma_addr = 0; + req->data_len = 0; + + txq->free_idx = (txq->free_idx + 1) % txq->req_pool_size; + mutex_lock(&txq->lock); + txq->req_budget++; + mutex_unlock(&txq->lock); + } + mtk_cldma_unmask_ip_busy_to_pci(mdev, txq->hw->base_addr, txq->txqno, IP_BUSY_TXDONE); + state = mtk_cldma_check_intr_status(mdev, txq->hw->base_addr, + DIR_TX, txq->txqno, QUEUE_XFER_DONE); + if (state) { + if (unlikely(state == LINK_ERROR_VAL)) + return; + + mtk_cldma_clr_intr_status(mdev, txq->hw->base_addr, DIR_TX, + txq->txqno, QUEUE_XFER_DONE); + + if (need_resched()) { + mtk_cldma_mask_ip_busy_to_pci(mdev, txq->hw->base_addr, + txq->txqno, IP_BUSY_TXDONE); + cond_resched(); + mtk_cldma_unmask_ip_busy_to_pci(mdev, txq->hw->base_addr, + txq->txqno, IP_BUSY_TXDONE); + } + + goto again; + } + + mtk_cldma_unmask_intr(mdev, txq->hw->base_addr, DIR_TX, txq->txqno, QUEUE_XFER_DONE); + mtk_cldma_clear_ip_busy(mdev, txq->hw->base_addr); +} + +static void mtk_cldma_rx_done_work(struct work_struct *work) +{ + struct rxq *rxq = container_of(work, struct rxq, rx_done_work); + struct cldma_hw *hw = rxq->hw; + u32 curr_addr_h, curr_addr_l; + struct mtk_bm_pool *bm_pool; + struct mtk_md_dev *mdev; + struct rx_req *req; + u64 curr_addr; + int i, err; + u32 state; + u64 addr; + + mdev = hw->mdev; + if (rxq->vq->rx_mtu > VQ_MTU_3_5K) + bm_pool = rxq->hw->cd->trans->ctrl_blk->bm_pool_63K; + else + bm_pool = rxq->hw->cd->trans->ctrl_blk->bm_pool; + + do { + for (i = 0; i < rxq->req_pool_size; i++) { + req = rxq->req_pool + rxq->free_idx; + if ((req->gpd->rx_gpd.gpd_flags & CLDMA_GPD_FLAG_HWO)) { + addr = hw->base_addr + REG_CLDMA_SO_CUR_ADDRH_0 + + (u64)rxq->rxqno * HW_QUEUE_NUM; + curr_addr_h = mtk_hw_read32(mdev, addr); + addr = hw->base_addr + REG_CLDMA_SO_CUR_ADDRL_0 + + (u64)rxq->rxqno * HW_QUEUE_NUM; + 
curr_addr_l = mtk_hw_read32(mdev, addr); + curr_addr = ((u64)curr_addr_h << 32) | curr_addr_l; + + if (req->gpd_dma_addr == curr_addr && + (req->gpd->rx_gpd.gpd_flags & CLDMA_GPD_FLAG_HWO)) + break; + } + + mtk_dma_unmap_single(mdev, req->data_dma_addr, req->mtu, DMA_FROM_DEVICE); + + rxq->rx_done(req->skb, le16_to_cpu(req->gpd->rx_gpd.data_recv_len), + rxq->arg); + + rxq->free_idx = (rxq->free_idx + 1) % rxq->req_pool_size; + req->skb = mtk_bm_alloc(bm_pool); + if (!req->skb) + break; + + err = mtk_dma_map_single(mdev, &req->data_dma_addr, req->skb->data, + req->mtu, DMA_FROM_DEVICE); + if (unlikely(err)) { + mtk_bm_free(bm_pool, req->skb); + break; + } + + req->gpd->rx_gpd.data_recv_len = 0; + req->gpd->rx_gpd.data_buff_ptr_h = + cpu_to_le32((u64)req->data_dma_addr >> 32); + req->gpd->rx_gpd.data_buff_ptr_l = cpu_to_le32(req->data_dma_addr); + req->gpd->rx_gpd.gpd_flags = CLDMA_GPD_FLAG_IOC | CLDMA_GPD_FLAG_HWO; + } + + mtk_cldma_resume_queue(mdev, rxq->hw->base_addr, DIR_RX, rxq->rxqno); + state = mtk_cldma_check_intr_status(mdev, rxq->hw->base_addr, + DIR_RX, rxq->rxqno, QUEUE_XFER_DONE); + + if (!state) + break; + + mtk_cldma_clr_intr_status(mdev, rxq->hw->base_addr, DIR_RX, + rxq->rxqno, QUEUE_XFER_DONE); + + if (need_resched()) + cond_resched(); + } while (true); + + mtk_cldma_unmask_intr(mdev, rxq->hw->base_addr, DIR_RX, rxq->rxqno, QUEUE_XFER_DONE); + mtk_cldma_mask_ip_busy_to_pci(mdev, rxq->hw->base_addr, rxq->rxqno, IP_BUSY_RXDONE); + mtk_cldma_clear_ip_busy(mdev, rxq->hw->base_addr); +} + +static int mtk_cldma_isr(int irq_id, void *param) +{ + u32 txq_xfer_done, rxq_xfer_done; + struct cldma_hw *hw = param; + u32 tx_mask, rx_mask; + u32 txq_err, rxq_err; + u32 tx_sta, rx_sta; + struct txq *txq; + struct rxq *rxq; + int i; + + tx_sta = mtk_hw_read32(hw->mdev, hw->base_addr + REG_CLDMA_L2TISAR0); + tx_mask = mtk_hw_read32(hw->mdev, hw->base_addr + REG_CLDMA_L2TIMR0); + rx_sta = mtk_hw_read32(hw->mdev, hw->base_addr + REG_CLDMA_L2RISAR0); + rx_mask = mtk_hw_read32(hw->mdev, hw->base_addr + REG_CLDMA_L2RIMR0); + + tx_sta = tx_sta & (~tx_mask); + rx_sta = rx_sta & (~rx_mask); + + if (tx_sta) { + /* TX mask */ + mtk_hw_write32(hw->mdev, hw->base_addr + REG_CLDMA_L2TIMSR0, tx_sta); + + txq_err = (tx_sta >> QUEUE_ERROR) & 0xFF; + if (txq_err) { + mtk_cldma_clr_intr_status(hw->mdev, hw->base_addr, + DIR_TX, ALLQ, QUEUE_ERROR); + mtk_hw_write32(hw->mdev, hw->base_addr + REG_CLDMA_L2TIMCR0, + (txq_err << QUEUE_ERROR)); + } + + /* TX clear */ + mtk_hw_write32(hw->mdev, hw->base_addr + REG_CLDMA_L2TISAR0, tx_sta); + + txq_xfer_done = (tx_sta >> QUEUE_XFER_DONE) & 0xFF; + if (txq_xfer_done) { + for (i = 0; i < HW_QUEUE_NUM; i++) { + if (txq_xfer_done & (1 << i)) { + txq = hw->txq[i]; + queue_work(hw->wq, &txq->tx_done_work); + } + } + } + } + + if (rx_sta) { + /* RX mask */ + mtk_hw_write32(hw->mdev, hw->base_addr + REG_CLDMA_L2RIMSR0, rx_sta); + + rxq_err = (rx_sta >> QUEUE_ERROR) & 0xFF; + if (rxq_err) { + mtk_cldma_clr_intr_status(hw->mdev, hw->base_addr, + DIR_RX, ALLQ, QUEUE_ERROR); + mtk_hw_write32(hw->mdev, hw->base_addr + REG_CLDMA_L2RIMCR0, + (rxq_err << QUEUE_ERROR)); + } + + /* RX clear */ + mtk_hw_write32(hw->mdev, hw->base_addr + REG_CLDMA_L2RISAR0, rx_sta); + + rxq_xfer_done = (rx_sta >> QUEUE_XFER_DONE) & 0xFF; + if (rxq_xfer_done) { + for (i = 0; i < HW_QUEUE_NUM; i++) { + if (rxq_xfer_done & (1 << i)) { + rxq = hw->rxq[i]; + queue_work(hw->wq, &rxq->rx_done_work); + } + } + } + } + + mtk_hw_clear_irq(hw->mdev, hw->pci_ext_irq_id); + mtk_hw_unmask_irq(hw->mdev, 
hw->pci_ext_irq_id); + + return IRQ_HANDLED; +} + +int mtk_cldma_hw_init_t800(struct cldma_dev *cd, int hif_id) +{ + char pool_name[DMA_POOL_NAME_LEN]; + struct cldma_hw *hw; + unsigned int flag; + + if (cd->cldma_hw[hif_id]) + return 0; + + hw = devm_kzalloc(cd->trans->mdev->dev, sizeof(*hw), GFP_KERNEL); + if (!hw) + return -ENOMEM; + + hw->cd = cd; + hw->mdev = cd->trans->mdev; + hw->hif_id = ((CLDMA_CLASS_ID) << 8) + hif_id; + snprintf(pool_name, DMA_POOL_NAME_LEN, "cldma%d_pool_%s", hw->hif_id, hw->mdev->dev_str); + hw->dma_pool = mtk_dma_pool_create(hw->mdev, pool_name, sizeof(union gpd), 64, 0); + if (!hw->dma_pool) + goto err_exit; + + switch (hif_id) { + case CLDMA0: + hw->pci_ext_irq_id = mtk_hw_get_irq_id(hw->mdev, MTK_IRQ_SRC_CLDMA0); + hw->base_addr = CLDMA0_BASE_ADDR; + break; + case CLDMA1: + hw->pci_ext_irq_id = mtk_hw_get_irq_id(hw->mdev, MTK_IRQ_SRC_CLDMA1); + hw->base_addr = CLDMA1_BASE_ADDR; + break; + default: + break; + } + + flag = WQ_UNBOUND | WQ_MEM_RECLAIM | WQ_HIGHPRI; + hw->wq = alloc_workqueue("cldma%d_workq_%s", flag, 0, hif_id, hw->mdev->dev_str); + + mtk_cldma_hw_init(hw->mdev, hw->base_addr); + + /* mask/clear PCI CLDMA L1 interrupt */ + mtk_hw_mask_irq(hw->mdev, hw->pci_ext_irq_id); + mtk_hw_clear_irq(hw->mdev, hw->pci_ext_irq_id); + + /* register CLDMA interrupt handler */ + mtk_hw_register_irq(hw->mdev, hw->pci_ext_irq_id, mtk_cldma_isr, hw); + + /* unmask PCI CLDMA L1 interrupt */ + mtk_hw_unmask_irq(hw->mdev, hw->pci_ext_irq_id); + + cd->cldma_hw[hif_id] = hw; + return 0; + +err_exit: + devm_kfree(hw->mdev->dev, hw); + + return -EIO; +} + +int mtk_cldma_hw_exit_t800(struct cldma_dev *cd, int hif_id) +{ + struct mtk_md_dev *mdev; + struct cldma_hw *hw; + int i; + + if (!cd->cldma_hw[hif_id]) + return 0; + + /* free cldma descriptor */ + hw = cd->cldma_hw[hif_id]; + mdev = cd->trans->mdev; + mtk_hw_mask_irq(mdev, hw->pci_ext_irq_id); + for (i = 0; i < HW_QUEUE_NUM; i++) { + if (hw->txq[i]) + cd->hw_ops.txq_free(hw, hw->txq[i]->vqno); + if (hw->rxq[i]) + cd->hw_ops.rxq_free(hw, hw->rxq[i]->vqno); + } + + flush_workqueue(hw->wq); + destroy_workqueue(hw->wq); + mtk_dma_pool_destroy(hw->dma_pool); + mtk_hw_unregister_irq(mdev, hw->pci_ext_irq_id); + + devm_kfree(mdev->dev, hw); + cd->cldma_hw[hif_id] = NULL; + + return 0; +} + +struct txq *mtk_cldma_txq_alloc_t800(struct cldma_hw *hw, struct sk_buff *skb) +{ + struct trb *trb = (struct trb *)skb->cb; + struct tx_req *next; + struct tx_req *req; + struct txq *txq; + int i; + + txq = devm_kzalloc(hw->mdev->dev, sizeof(*txq), GFP_KERNEL); + if (!txq) + return NULL; + + txq->hw = hw; + txq->vqno = trb->vqno; + txq->vq = hw->cd->trans->vq_tbl + trb->vqno; + txq->txqno = txq->vq->txqno; + txq->req_pool_size = txq->vq->tx_req_num; + txq->req_budget = txq->vq->tx_req_num; + txq->is_stopping = false; + mutex_init(&txq->lock); + if (unlikely(txq->txqno < 0 || txq->txqno >= HW_QUEUE_NUM)) + goto err_exit; + + txq->req_pool = devm_kcalloc(hw->mdev->dev, txq->req_pool_size, sizeof(*req), GFP_KERNEL); + if (!txq->req_pool) + goto err_exit; + + for (i = 0; i < txq->req_pool_size; i++) { + req = txq->req_pool + i; + req->mtu = txq->vq->tx_mtu; + req->gpd = mtk_dma_pool_alloc(hw->dma_pool, GFP_KERNEL, &req->gpd_dma_addr); + if (!req->gpd) + goto exit_free_req; + } + + for (i = 0; i < txq->req_pool_size; i++) { + req = txq->req_pool + i; + next = txq->req_pool + ((i + 1) % txq->req_pool_size); + req->gpd->tx_gpd.next_gpd_ptr_h = cpu_to_le32((u64)(next->gpd_dma_addr) >> 32); + req->gpd->tx_gpd.next_gpd_ptr_l = 
cpu_to_le32(next->gpd_dma_addr); + } + + INIT_WORK(&txq->tx_done_work, mtk_cldma_tx_done_work); + + mtk_cldma_stop_queue(hw->mdev, hw->base_addr, DIR_TX, txq->txqno); + txq->tx_started = false; + mtk_cldma_setup_start_addr(hw->mdev, hw->base_addr, DIR_TX, txq->txqno, + txq->req_pool[0].gpd_dma_addr); + mtk_cldma_unmask_intr(hw->mdev, hw->base_addr, DIR_TX, txq->txqno, QUEUE_ERROR); + mtk_cldma_unmask_intr(hw->mdev, hw->base_addr, DIR_TX, txq->txqno, QUEUE_XFER_DONE); + + hw->txq[txq->txqno] = txq; + return txq; + +exit_free_req: + for (i--; i >= 0; i--) { + req = txq->req_pool + i; + mtk_dma_pool_free(hw->dma_pool, req->gpd, req->gpd_dma_addr); + } + + devm_kfree(hw->mdev->dev, txq->req_pool); +err_exit: + devm_kfree(hw->mdev->dev, txq); + return NULL; +} + +int mtk_cldma_txq_free_t800(struct cldma_hw *hw, int vqno) +{ + struct virtq *vq = hw->cd->trans->vq_tbl + vqno; + unsigned int active; + struct tx_req *req; + struct txq *txq; + struct trb *trb; + int cnt = 0; + int irq_id; + int txqno; + int i; + + txqno = vq->txqno; + if (unlikely(txqno < 0 || txqno >= HW_QUEUE_NUM)) + return -EINVAL; + txq = hw->txq[txqno]; + if (!txq) + return -EINVAL; + + /* stop HW tx transaction */ + mtk_cldma_stop_queue(hw->mdev, hw->base_addr, DIR_TX, txqno); + txq->tx_started = false; + do { + active = mtk_cldma_queue_status(hw->mdev, hw->base_addr, DIR_TX, txqno); + if (active == LINK_ERROR_VAL) + break; + msleep(CLDMA_STOP_HW_WAIT_TIME_MS); /* ensure HW tx transaction done */ + cnt++; + } while (active && cnt < CLDMA_STOP_HW_POLLING_MAX_CNT); + + irq_id = mtk_hw_get_virq_id(hw->mdev, hw->pci_ext_irq_id); + synchronize_irq(irq_id); + + flush_work(&txq->tx_done_work); + mtk_cldma_mask_intr(hw->mdev, hw->base_addr, DIR_TX, txqno, QUEUE_XFER_DONE); + mtk_cldma_mask_intr(hw->mdev, hw->base_addr, DIR_TX, txqno, QUEUE_ERROR); + + /* free tx req resource */ + for (i = 0; i < txq->req_pool_size; i++) { + req = txq->req_pool + i; + if (req->data_dma_addr && req->data_len) { + mtk_dma_unmap_single(hw->mdev, req->data_dma_addr, + req->data_len, DMA_TO_DEVICE); + trb = (struct trb *)req->skb->cb; + trb->status = -EPIPE; + trb->trb_complete(req->skb); + } + mtk_dma_pool_free(hw->dma_pool, req->gpd, req->gpd_dma_addr); + } + + devm_kfree(hw->mdev->dev, txq->req_pool); + devm_kfree(hw->mdev->dev, txq); + hw->txq[txqno] = NULL; + + return 0; +} + +struct rxq *mtk_cldma_rxq_alloc_t800(struct cldma_hw *hw, struct sk_buff *skb) +{ + struct trb_open_priv *trb_open_priv = (struct trb_open_priv *)skb->data; + struct trb *trb = (struct trb *)skb->cb; + struct mtk_bm_pool *bm_pool; + struct rx_req *next; + struct rx_req *req; + struct rxq *rxq; + int err; + int i; + + rxq = devm_kzalloc(hw->mdev->dev, sizeof(*rxq), GFP_KERNEL); + if (!rxq) + return NULL; + + rxq->hw = hw; + rxq->vqno = trb->vqno; + rxq->vq = hw->cd->trans->vq_tbl + trb->vqno; + rxq->rxqno = rxq->vq->rxqno; + rxq->req_pool_size = rxq->vq->rx_req_num; + rxq->arg = trb->priv; + rxq->rx_done = trb_open_priv->rx_done; + if (unlikely(rxq->rxqno < 0 || rxq->rxqno >= HW_QUEUE_NUM)) + goto err_exit; + + rxq->req_pool = devm_kcalloc(hw->mdev->dev, rxq->req_pool_size, sizeof(*req), GFP_KERNEL); + if (!rxq->req_pool) + goto err_exit; + + if (rxq->vq->rx_mtu > VQ_MTU_3_5K) + bm_pool = hw->cd->trans->ctrl_blk->bm_pool_63K; + else + bm_pool = hw->cd->trans->ctrl_blk->bm_pool; + + /* setup rx request */ + for (i = 0; i < rxq->req_pool_size; i++) { + req = rxq->req_pool + i; + req->mtu = rxq->vq->rx_mtu; + req->gpd = mtk_dma_pool_alloc(hw->dma_pool, GFP_KERNEL, 
&req->gpd_dma_addr); + if (!req->gpd) + goto exit_free_req; + + req->skb = mtk_bm_alloc(bm_pool); + if (!req->skb) { + mtk_dma_pool_free(hw->dma_pool, req->gpd, req->gpd_dma_addr); + goto exit_free_req; + } + + err = mtk_dma_map_single(hw->mdev, &req->data_dma_addr, req->skb->data, + req->mtu, DMA_FROM_DEVICE); + if (err) { + i++; + goto exit_free_req; + } + } + + for (i = 0; i < rxq->req_pool_size; i++) { + req = rxq->req_pool + i; + next = rxq->req_pool + ((i + 1) % rxq->req_pool_size); + req->gpd->rx_gpd.gpd_flags = CLDMA_GPD_FLAG_IOC | CLDMA_GPD_FLAG_HWO; + req->gpd->rx_gpd.data_allow_len = cpu_to_le16(req->mtu); + req->gpd->rx_gpd.next_gpd_ptr_h = cpu_to_le32((u64)(next->gpd_dma_addr) >> 32); + req->gpd->rx_gpd.next_gpd_ptr_l = cpu_to_le32(next->gpd_dma_addr); + req->gpd->rx_gpd.data_buff_ptr_h = cpu_to_le32((u64)(req->data_dma_addr) >> 32); + req->gpd->rx_gpd.data_buff_ptr_l = cpu_to_le32(req->data_dma_addr); + } + + INIT_WORK(&rxq->rx_done_work, mtk_cldma_rx_done_work); + + hw->rxq[rxq->rxqno] = rxq; + mtk_cldma_stop_queue(hw->mdev, hw->base_addr, DIR_RX, rxq->rxqno); + mtk_cldma_setup_start_addr(hw->mdev, hw->base_addr, DIR_RX, + rxq->rxqno, rxq->req_pool[0].gpd_dma_addr); + mtk_cldma_start_queue(hw->mdev, hw->base_addr, DIR_RX, rxq->rxqno); + mtk_cldma_unmask_intr(hw->mdev, hw->base_addr, DIR_RX, rxq->rxqno, QUEUE_ERROR); + mtk_cldma_unmask_intr(hw->mdev, hw->base_addr, DIR_RX, rxq->rxqno, QUEUE_XFER_DONE); + + return rxq; + +exit_free_req: + for (i--; i >= 0; i--) { + req = rxq->req_pool + i; + mtk_dma_unmap_single(hw->mdev, req->data_dma_addr, req->mtu, DMA_FROM_DEVICE); + mtk_dma_pool_free(hw->dma_pool, req->gpd, req->gpd_dma_addr); + if (req->skb) + mtk_bm_free(bm_pool, req->skb); + } + + devm_kfree(hw->mdev->dev, rxq->req_pool); +err_exit: + devm_kfree(hw->mdev->dev, rxq); + return NULL; +} + +int mtk_cldma_rxq_free_t800(struct cldma_hw *hw, int vqno) +{ + struct mtk_bm_pool *bm_pool; + struct mtk_md_dev *mdev; + unsigned int active; + struct rx_req *req; + struct virtq *vq; + struct rxq *rxq; + int cnt = 0; + int irq_id; + int rxqno; + int i; + + mdev = hw->mdev; + vq = hw->cd->trans->vq_tbl + vqno; + rxqno = vq->rxqno; + if (unlikely(rxqno < 0 || rxqno >= HW_QUEUE_NUM)) + return -EINVAL; + rxq = hw->rxq[rxqno]; + if (!rxq) + return -EINVAL; + + if (rxq->vq->rx_mtu > VQ_MTU_3_5K) + bm_pool = hw->cd->trans->ctrl_blk->bm_pool_63K; + else + bm_pool = hw->cd->trans->ctrl_blk->bm_pool; + + mtk_cldma_stop_queue(mdev, hw->base_addr, DIR_RX, rxqno); + do { + /* check CLDMA HW state register */ + active = mtk_cldma_queue_status(mdev, hw->base_addr, DIR_RX, rxqno); + if (active == LINK_ERROR_VAL) + break; + msleep(CLDMA_STOP_HW_WAIT_TIME_MS); /* ensure HW rx transaction done */ + cnt++; + } while (active && cnt < CLDMA_STOP_HW_POLLING_MAX_CNT); + + irq_id = mtk_hw_get_virq_id(hw->mdev, hw->pci_ext_irq_id); + synchronize_irq(irq_id); + + flush_work(&rxq->rx_done_work); + mtk_cldma_mask_intr(mdev, hw->base_addr, DIR_RX, rxqno, QUEUE_XFER_DONE); + mtk_cldma_mask_intr(mdev, hw->base_addr, DIR_RX, rxqno, QUEUE_ERROR); + + /* free rx req resource */ + for (i = 0; i < rxq->req_pool_size; i++) { + req = rxq->req_pool + i; + if (!(req->gpd->rx_gpd.gpd_flags & CLDMA_GPD_FLAG_HWO) && + le16_to_cpu(req->gpd->rx_gpd.data_recv_len)) { + mtk_dma_unmap_single(mdev, req->data_dma_addr, + req->mtu, DMA_FROM_DEVICE); + rxq->rx_done(req->skb, le16_to_cpu(req->gpd->rx_gpd.data_recv_len), + rxq->arg); + req->skb = NULL; + } + + mtk_dma_pool_free(hw->dma_pool, req->gpd, req->gpd_dma_addr); + if (req->skb) { 
+ mtk_bm_free(bm_pool, req->skb); + mtk_dma_unmap_single(mdev, req->data_dma_addr, + req->mtu, DMA_FROM_DEVICE); + } + } + + devm_kfree(mdev->dev, rxq->req_pool); + devm_kfree(mdev->dev, rxq); + hw->rxq[rxqno] = NULL; + + return 0; +} + +int mtk_cldma_start_xfer_t800(struct cldma_hw *hw, int qno) +{ + struct txq *txq; + u32 addr, val; + int idx; + + txq = hw->txq[qno]; + addr = hw->base_addr + REG_CLDMA_UL_START_ADDRL_0 + qno * HW_QUEUE_NUM; + val = mtk_hw_read32(hw->mdev, addr); + if (unlikely(!val)) { + mtk_cldma_hw_init(hw->mdev, hw->base_addr); + txq = hw->txq[qno]; + idx = (txq->wr_idx + txq->req_pool_size - 1) % txq->req_pool_size; + mtk_cldma_setup_start_addr(hw->mdev, hw->base_addr, DIR_TX, qno, + txq->req_pool[idx].gpd_dma_addr); + mtk_cldma_start_queue(hw->mdev, hw->base_addr, DIR_TX, qno); + txq->tx_started = true; + } else { + if (unlikely(!txq->tx_started)) { + mtk_cldma_start_queue(hw->mdev, hw->base_addr, DIR_TX, qno); + txq->tx_started = true; + } else { + mtk_cldma_resume_queue(hw->mdev, hw->base_addr, DIR_TX, qno); + } + } + + return 0; +} diff --git a/drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.h b/drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.h new file mode 100644 index 000000000000..b89d45a81c4f --- /dev/null +++ b/drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.h @@ -0,0 +1,20 @@ +/* SPDX-License-Identifier: BSD-3-Clause-Clear + * + * Copyright (c) 2022, MediaTek Inc. + */ + +#ifndef __MTK_CLDMA_DRV_T800_H__ +#define __MTK_CLDMA_DRV_T800_H__ + +#include + +#include "mtk_cldma.h" + +int mtk_cldma_hw_init_t800(struct cldma_dev *cd, int hif_id); +int mtk_cldma_hw_exit_t800(struct cldma_dev *cd, int hif_id); +struct txq *mtk_cldma_txq_alloc_t800(struct cldma_hw *hw, struct sk_buff *skb); +int mtk_cldma_txq_free_t800(struct cldma_hw *hw, int vqno); +struct rxq *mtk_cldma_rxq_alloc_t800(struct cldma_hw *hw, struct sk_buff *skb); +int mtk_cldma_rxq_free_t800(struct cldma_hw *hw, int vqno); +int mtk_cldma_start_xfer_t800(struct cldma_hw *hw, int qno); +#endif From patchwork Tue Nov 22 11:11:44 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?b?WWFuY2hhbyBZYW5nICjmnajlvabotoUp?= X-Patchwork-Id: 24300 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp2141551wrr; Tue, 22 Nov 2022 03:24:36 -0800 (PST) X-Google-Smtp-Source: AA0mqf790Hn8bbKYlVVpfE95lTDJmrTTOQ3nQq1hwtl0+KAaDhURAdC71O6ne+f+WQL/fgbRSLTi X-Received: by 2002:a17:907:2143:b0:7ae:27ed:e90e with SMTP id rk3-20020a170907214300b007ae27ede90emr19069050ejb.224.1669116276433; Tue, 22 Nov 2022 03:24:36 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1669116276; cv=none; d=google.com; s=arc-20160816; b=G5A0DP//O1cMWCd2xSbaFvc+rDXg5wbZkI6ZOGB8/auD1gOA3c1qmxtEse5RhL9vKt hbqOLstmIF3QPi5Z2Vls/SzUNz1vr+PcKvp3dY5P2Nrgu2kfjx7NGxB6Adh3Kmi9TNFg Ggo8uhwljlxZPAG+uToxlxhLgw8ReihO7nROzSZzbs2qmyIcYhOA/FYSj047N85yETPu f2CIql8HtMByckD95/DyHhPUuQmGmdhw0/8eE2sBgN6o9EOWpcZ9TotsMtHmeGnwD9lU sLY26LVO0fye3dV5eJZjhjJK7BVC27C8ZxraYcqXvr20nvBY/yrZkfJNHIrTZUTlzF1a yrqg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:references:in-reply-to:message-id :date:subject:cc:to:from:dkim-signature; bh=CYsGYFIPjOQtrLrq8IyLjWMBcf2smOYowiQ2yxlTPMg=; b=AtRZR0AzS50Ci4tIq9l6wF3lbQZ23D1H4SFX+Bt9ZTWmdaI0i2XN5oKInv+s+qjTgX BRO8sHcVZlE407/WIj4obnaRsVkP906oChDhOzHVcY0CRqlzumDNCl1/CyEfxbTjPFTP hKYRSQSLgLTuat2KzCHiRFNEh2Ne0ARFrxAyxmJB32xDXamHNtGRjptQh9PjY3Im+5Qa 
From: Yanchao Yang To: Loic Poulain , Sergey Ryazanov , Johannes Berg , "David S .
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , netdev ML , kernel ML CC: MTK ML , Liang Lu , Haijun Liu , Hua Yang , Ting Wang , Felix Chen , Mingliang Xu , Min Dong , Aiden Wang , Guohao Zhang , Chris Feng , Yanchao Yang , Lambert Wang , Mingchuang Qiao , Xiayu Zhang , Haozhe Chang , MediaTek Corporation Subject: [PATCH net-next v1 05/13] net: wwan: tmi: Add control port Date: Tue, 22 Nov 2022 19:11:44 +0800 Message-ID: <20221122111152.160377-6-yanchao.yang@mediatek.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20221122111152.160377-1-yanchao.yang@mediatek.com> References: <20221122111152.160377-1-yanchao.yang@mediatek.com> MIME-Version: 1.0 X-MTK: N X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_MSPIKE_H2,SPF_HELO_PASS, SPF_PASS,UNPARSEABLE_RELAY autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1750195268755028079?= X-GMAIL-MSGID: =?utf-8?q?1750195268755028079?= From: MediaTek Corporation The control port consists of port I/O and port manager. Port I/O provides a common operation as defined by "struct port_ops", and the operation is managed by the "port manager". It provides interfaces to internal users, the implemented internal interfaces are open, close, write and recv_register. The port manager defines and implements port management interfaces and structures. It is responsible for port creation, destroying, and managing port states. It sends data from port I/O to CLDMA via TRB ( Transaction Request Block ), and dispatches received data from CLDMA to port I/O. The using port will be held in the "stale list" when the driver destroys it, and after creating it again, the user can continue to use it. 
Signed-off-by: Felix Chen Signed-off-by: MediaTek Corporation --- drivers/net/wwan/mediatek/Makefile | 4 +- drivers/net/wwan/mediatek/mtk_ctrl_plane.c | 113 +++ drivers/net/wwan/mediatek/mtk_ctrl_plane.h | 27 +- drivers/net/wwan/mediatek/mtk_port.c | 981 +++++++++++++++++++++ drivers/net/wwan/mediatek/mtk_port.h | 222 +++++ drivers/net/wwan/mediatek/mtk_port_io.c | 301 +++++++ drivers/net/wwan/mediatek/mtk_port_io.h | 45 + drivers/net/wwan/mediatek/pcie/mtk_pci.c | 2 + 8 files changed, 1692 insertions(+), 3 deletions(-) create mode 100644 drivers/net/wwan/mediatek/mtk_port.c create mode 100644 drivers/net/wwan/mediatek/mtk_port.h create mode 100644 drivers/net/wwan/mediatek/mtk_port_io.c create mode 100644 drivers/net/wwan/mediatek/mtk_port_io.h diff --git a/drivers/net/wwan/mediatek/Makefile b/drivers/net/wwan/mediatek/Makefile index 77158d3d587a..177211b92826 100644 --- a/drivers/net/wwan/mediatek/Makefile +++ b/drivers/net/wwan/mediatek/Makefile @@ -8,7 +8,9 @@ mtk_tmi-y = \ mtk_bm.o \ mtk_ctrl_plane.o \ mtk_cldma.o \ - pcie/mtk_cldma_drv_t800.o + pcie/mtk_cldma_drv_t800.o \ + mtk_port.o \ + mtk_port_io.o ccflags-y += -I$(srctree)/$(src)/ ccflags-y += -I$(srctree)/$(src)/pcie/ diff --git a/drivers/net/wwan/mediatek/mtk_ctrl_plane.c b/drivers/net/wwan/mediatek/mtk_ctrl_plane.c index 4c8f71223a11..74845f8afa3e 100644 --- a/drivers/net/wwan/mediatek/mtk_ctrl_plane.c +++ b/drivers/net/wwan/mediatek/mtk_ctrl_plane.c @@ -11,7 +11,113 @@ #include #include "mtk_bm.h" +#include "mtk_cldma.h" #include "mtk_ctrl_plane.h" +#include "mtk_port.h" + +static int mtk_ctrl_get_hif_id(unsigned char peer_id) +{ + if (peer_id == MTK_PEER_ID_SAP) + return CLDMA0; + else if (peer_id == MTK_PEER_ID_MD) + return CLDMA1; + else + return -EINVAL; +} + +int mtk_ctrl_vq_search(struct mtk_ctrl_blk *ctrl_blk, unsigned char peer_id, + unsigned char tx_hwq, unsigned char rx_hwq) +{ + struct mtk_port_mngr *port_mngr = ctrl_blk->port_mngr; + struct mtk_ctrl_trans *trans = ctrl_blk->trans; + int hif_id = mtk_ctrl_get_hif_id(peer_id); + struct virtq *vq; + int vq_num = 0; + + if (hif_id < 0) + return -EINVAL; + + do { + vq = trans->vq_tbl + vq_num; + if (port_mngr->vq_info[vq_num].color && vq->txqno == tx_hwq && + vq->rxqno == rx_hwq && vq->hif_id == hif_id) + return vq_num; + + vq_num++; + } while (vq_num < VQ_NUM); + + return -ENOENT; +} + +int mtk_ctrl_vq_color_paint(struct mtk_ctrl_blk *ctrl_blk, unsigned char peer_id, + unsigned char tx_hwq, unsigned char rx_hwq, + unsigned int tx_mtu, unsigned int rx_mtu) +{ + struct mtk_port_mngr *port_mngr = ctrl_blk->port_mngr; + struct mtk_ctrl_trans *trans = ctrl_blk->trans; + int hif_id = mtk_ctrl_get_hif_id(peer_id); + struct virtq *vq; + int vq_num = 0; + + if (hif_id < 0) + return -EINVAL; + + do { + vq = trans->vq_tbl + vq_num; + if (vq->hif_id == hif_id && vq->txqno == tx_hwq && vq->rxqno == rx_hwq && + vq->tx_mtu <= tx_mtu && vq->rx_mtu >= rx_mtu) + port_mngr->vq_info[vq_num].color = true; + + vq_num++; + } while (vq_num < VQ_NUM); + + return 0; +} + +int mtk_ctrl_vq_color_cleanup(struct mtk_ctrl_blk *ctrl_blk, unsigned char peer_id) +{ + struct mtk_port_mngr *port_mngr = ctrl_blk->port_mngr; + struct mtk_ctrl_trans *trans = ctrl_blk->trans; + int hif_id = mtk_ctrl_get_hif_id(peer_id); + struct virtq *vq; + int vq_num = 0; + + if (hif_id < 0) + return -EINVAL; + + do { + vq = trans->vq_tbl + vq_num; + if (vq->hif_id == hif_id) + port_mngr->vq_info[vq_num].color = false; + + vq_num++; + } while (vq_num < VQ_NUM); + + return 0; +} + +int mtk_ctrl_trb_submit(struct 
mtk_ctrl_blk *blk, struct sk_buff *skb) +{ + struct mtk_ctrl_trans *trans = blk->trans; + struct trb *trb; + int vqno; + + trb = (struct trb *)skb->cb; + if (trb->vqno >= VQ_NUM) + return -EINVAL; + + if (!atomic_read(&trans->available)) + return -EIO; + + vqno = trb->vqno; + if (VQ_LIST_FULL(trans, vqno) && trb->cmd != TRB_CMD_DISABLE) + return -EAGAIN; + + /* This function will implement in next patch */ + wake_up(&trans->trb_srv->trb_waitq); + + return 0; +} int mtk_ctrl_init(struct mtk_md_dev *mdev) { @@ -40,8 +146,14 @@ int mtk_ctrl_init(struct mtk_md_dev *mdev) goto err_destroy_pool; } + err = mtk_port_mngr_init(ctrl_blk); + if (err) + goto err_destroy_pool_63K; + return 0; +err_destroy_pool_63K: + mtk_bm_pool_destroy(mdev, ctrl_blk->bm_pool_63K); err_destroy_pool: mtk_bm_pool_destroy(mdev, ctrl_blk->bm_pool); err_free_mem: @@ -54,6 +166,7 @@ int mtk_ctrl_exit(struct mtk_md_dev *mdev) { struct mtk_ctrl_blk *ctrl_blk = mdev->ctrl_blk; + mtk_port_mngr_exit(ctrl_blk); mtk_bm_pool_destroy(mdev, ctrl_blk->bm_pool); mtk_bm_pool_destroy(mdev, ctrl_blk->bm_pool_63K); devm_kfree(mdev->dev, ctrl_blk); diff --git a/drivers/net/wwan/mediatek/mtk_ctrl_plane.h b/drivers/net/wwan/mediatek/mtk_ctrl_plane.h index 427d5a06b3cc..38574dc21455 100644 --- a/drivers/net/wwan/mediatek/mtk_ctrl_plane.h +++ b/drivers/net/wwan/mediatek/mtk_ctrl_plane.h @@ -11,9 +11,13 @@ #include "mtk_dev.h" +#define VQ(N) (N) +#define VQ_NUM (2) + #define VQ_MTU_3_5K (0xE00) #define VQ_MTU_63K (0xFC00) +#define SKB_LIST_MAX_LEN (16) #define BUFF_3_5K_MAX_CNT (100) #define BUFF_63K_MAX_CNT (64) @@ -21,6 +25,8 @@ #define HIF_CLASS_SHIFT (8) #define HIF_ID_BITMASK (0x01) +#define VQ_LIST_FULL(trans, vqno) ((trans)->skb_list[vqno].qlen >= SKB_LIST_MAX_LEN) + enum mtk_trb_cmd_type { TRB_CMD_ENABLE = 1, TRB_CMD_TX, @@ -42,6 +48,14 @@ struct trb { int (*trb_complete)(struct sk_buff *skb); }; +struct trb_srv { + int vq_cnt; + int vq_start; + struct mtk_ctrl_trans *trans; + wait_queue_head_t trb_waitq; + struct task_struct *trb_thread; +}; + struct virtq { int vqno; int hif_id; @@ -53,8 +67,6 @@ struct virtq { int rx_req_num; }; -struct mtk_ctrl_trans; - struct hif_ops { int (*init)(struct mtk_ctrl_trans *trans); int (*exit)(struct mtk_ctrl_trans *trans); @@ -63,20 +75,31 @@ struct hif_ops { }; struct mtk_ctrl_trans { + struct sk_buff_head skb_list[VQ_NUM]; + struct trb_srv *trb_srv; struct virtq *vq_tbl; void *dev[HIF_CLASS_NUM]; struct hif_ops *ops[HIF_CLASS_NUM]; struct mtk_ctrl_blk *ctrl_blk; struct mtk_md_dev *mdev; + atomic_t available; }; struct mtk_ctrl_blk { struct mtk_md_dev *mdev; + struct mtk_port_mngr *port_mngr; struct mtk_ctrl_trans *trans; struct mtk_bm_pool *bm_pool; struct mtk_bm_pool *bm_pool_63K; }; +int mtk_ctrl_vq_search(struct mtk_ctrl_blk *ctrl_blk, unsigned char peer_id, + unsigned char tx_hwq, unsigned char rx_hwq); +int mtk_ctrl_vq_color_paint(struct mtk_ctrl_blk *ctrl_blk, unsigned char peer_id, + unsigned char tx_hwq, unsigned char rx_hwq, + unsigned int tx_mtu, unsigned int rx_mtu); +int mtk_ctrl_vq_color_cleanup(struct mtk_ctrl_blk *ctrl_blk, unsigned char peer_id); +int mtk_ctrl_trb_submit(struct mtk_ctrl_blk *blk, struct sk_buff *skb); int mtk_ctrl_init(struct mtk_md_dev *mdev); int mtk_ctrl_exit(struct mtk_md_dev *mdev); diff --git a/drivers/net/wwan/mediatek/mtk_port.c b/drivers/net/wwan/mediatek/mtk_port.c new file mode 100644 index 000000000000..b9bf2a57f763 --- /dev/null +++ b/drivers/net/wwan/mediatek/mtk_port.c @@ -0,0 +1,981 @@ +// SPDX-License-Identifier: BSD-3-Clause-Clear +/* + * Copyright 
(c) 2022, MediaTek Inc. + */ + +#include +#include +#include +#include +#include +#include +#include + +#include "mtk_port.h" +#include "mtk_port_io.h" + +#define MTK_DFLT_TRB_TIMEOUT (5 * HZ) +#define MTK_DFLT_TRB_STATUS (0x1) +#define MTK_CHECK_RX_SEQ_MASK (0x7fff) + +#define MTK_PORT_SEARCH_FROM_RADIX_TREE(p, s) ({\ + struct mtk_port *_p; \ + _p = radix_tree_deref_slot(s); \ + if (!_p) \ + continue; \ + p = _p; \ +}) + +#define MTK_PORT_INTERNAL_NODE_CHECK(p, s, i) ({\ + if (radix_tree_is_internal_node(p)) { \ + s = radix_tree_iter_retry(&(i));\ + continue; \ + } \ +}) + +/* global group for stale ports */ +static LIST_HEAD(stale_list_grp); +/* mutex lock for stale_list_group */ +DEFINE_MUTEX(port_mngr_grp_mtx); + +static DEFINE_IDA(ccci_dev_ids); + +static const struct mtk_port_cfg port_cfg[] = { + {CCCI_CONTROL_TX, CCCI_CONTROL_RX, VQ(1), PORT_TYPE_INTERNAL, "MDCTRL", PORT_F_ALLOW_DROP}, + {CCCI_SAP_CONTROL_TX, CCCI_SAP_CONTROL_RX, VQ(0), PORT_TYPE_INTERNAL, "SAPCTRL", + PORT_F_ALLOW_DROP}, +}; + +/* This function working always under mutex lock port_mngr_grp_mtx */ +void mtk_port_release(struct kref *port_kref) +{ + struct mtk_stale_list *s_list; + struct mtk_port *port; + + port = container_of(port_kref, struct mtk_port, kref); + /* The port on stale list also be deleted when release this port */ + if (!test_bit(PORT_S_ON_STALE_LIST, &port->status)) + goto port_exit; + + list_del(&port->stale_entry); + list_for_each_entry(s_list, &stale_list_grp, entry) { + /* If this port is the last port of stale list, free the list and dev_id */ + if (!strncmp(s_list->dev_str, port->dev_str, MTK_DEV_STR_LEN) && + list_empty(&s_list->ports) && s_list->dev_id >= 0) { + pr_info("Free dev id of stale list(%s)\n", s_list->dev_str); + ida_free(&ccci_dev_ids, s_list->dev_id); + s_list->dev_id = -1; + break; + } + } + +port_exit: + ports_ops[port->info.type]->exit(port); + kfree(port); +} + +static int mtk_port_tbl_add(struct mtk_port_mngr *port_mngr, struct mtk_port *port) +{ + int ret; + + ret = radix_tree_insert(&port_mngr->port_tbl[MTK_PORT_TBL_TYPE(port->info.rx_ch)], + port->info.rx_ch & 0xFFF, port); + if (ret) + dev_err(port_mngr->ctrl_blk->mdev->dev, + "port(%s) add to port_tbl failed, return %d\n", + port->info.name, ret); + + return ret; +} + +static void mtk_port_tbl_del(struct mtk_port_mngr *port_mngr, struct mtk_port *port) +{ + radix_tree_delete(&port_mngr->port_tbl[MTK_PORT_TBL_TYPE(port->info.rx_ch)], + port->info.rx_ch & 0xFFF); +} + +static struct mtk_port *mtk_port_get_from_stale_list(struct mtk_port_mngr *port_mngr, + struct mtk_stale_list *s_list, + int rx_ch) +{ + struct mtk_port *port, *next_port; + int ret; + + mutex_lock(&port_mngr_grp_mtx); + list_for_each_entry_safe(port, next_port, &s_list->ports, stale_entry) { + if (port->info.rx_ch == rx_ch) { + kref_get(&port->kref); + list_del(&port->stale_entry); + ret = mtk_port_tbl_add(port_mngr, port); + if (ret) { + list_add_tail(&port->stale_entry, &s_list->ports); + kref_put(&port->kref, mtk_port_release); + mutex_unlock(&port_mngr_grp_mtx); + dev_err(port_mngr->ctrl_blk->mdev->dev, + "Failed when adding (%s) to port mngr\n", + port->info.name); + return ERR_PTR(ret); + } + + port->port_mngr = port_mngr; + clear_bit(PORT_S_ON_STALE_LIST, &port->status); + mutex_unlock(&port_mngr_grp_mtx); + return port; + } + } + mutex_unlock(&port_mngr_grp_mtx); + + return NULL; +} + +static struct mtk_port *mtk_port_alloc_or_restore(struct mtk_port_mngr *port_mngr, + struct mtk_port_cfg *dflt_info, + struct mtk_stale_list *s_list) +{ + struct 
mtk_port *port; + int ret; + + port = mtk_port_get_from_stale_list(port_mngr, s_list, dflt_info->rx_ch); + if (IS_ERR(port)) { + /* Failed when adding to port mngr */ + return port; + } + + if (port) { + ports_ops[port->info.type]->reset(port); + dev_info(port_mngr->ctrl_blk->mdev->dev, + "Port(%s) move from stale list\n", port->info.name); + goto return_port; + } + + /* This memory will be free in function "mtk_port_release", if + * "mtk_port_release" called by mtk_port_stale_list_grp_cleanup, + * we can't use "devm_free" due to no dev(struct device) entity. + */ + port = kzalloc(sizeof(*port), GFP_KERNEL); + if (!port) { + ret = -ENOMEM; + goto err_alloc_port; + } + + memcpy(port, dflt_info, sizeof(*dflt_info)); + ret = mtk_port_tbl_add(port_mngr, port); + if (ret < 0) { + dev_err(port_mngr->ctrl_blk->mdev->dev, + "Failed to add port(%s) to port tbl\n", dflt_info->name); + goto err_add_port; + } + + port->port_mngr = port_mngr; + ports_ops[port->info.type]->init(port); + dev_info(port_mngr->ctrl_blk->mdev->dev, + "Port(%s) alloc and init\n", port->info.name); + +return_port: + return port; +err_add_port: + kfree(port); +err_alloc_port: + return ERR_PTR(ret); +} + +static void mtk_port_free_or_backup(struct mtk_port_mngr *port_mngr, + struct mtk_port *port, struct mtk_stale_list *s_list) +{ + mutex_lock(&port_mngr_grp_mtx); + mtk_port_tbl_del(port_mngr, port); + if (port->info.type != PORT_TYPE_INTERNAL) { + if (test_bit(PORT_S_OPEN, &port->status)) { + /* backup: move using ports to stale list, for no need to + * re-open ports after remove and plug-in device again + */ + list_add_tail(&port->stale_entry, &s_list->ports); + set_bit(PORT_S_ON_STALE_LIST, &port->status); + dev_info(port_mngr->ctrl_blk->mdev->dev, + "Port(%s) move to stale list\n", port->info.name); + memcpy(port->dev_str, port_mngr->ctrl_blk->mdev->dev_str, MTK_DEV_STR_LEN); + port->port_mngr = NULL; + } + kref_put(&port->kref, mtk_port_release); + } else { + mtk_port_release(&port->kref); + } + mutex_unlock(&port_mngr_grp_mtx); +} + +static struct mtk_port *mtk_port_search_by_id(struct mtk_port_mngr *port_mngr, int rx_ch) +{ + int tbl_type = MTK_PORT_TBL_TYPE(rx_ch); + + if (tbl_type < PORT_TBL_SAP || tbl_type >= PORT_TBL_MAX) + return NULL; + + return radix_tree_lookup(&port_mngr->port_tbl[tbl_type], MTK_CH_ID(rx_ch)); +} + +struct mtk_port *mtk_port_search_by_name(struct mtk_port_mngr *port_mngr, char *name) +{ + int tbl_type = PORT_TBL_SAP; + struct radix_tree_iter iter; + struct mtk_port *port; + void __rcu **slot; + + do { + radix_tree_for_each_slot(slot, &port_mngr->port_tbl[tbl_type], &iter, 0) { + MTK_PORT_SEARCH_FROM_RADIX_TREE(port, slot); + MTK_PORT_INTERNAL_NODE_CHECK(port, slot, iter); + if (!strncmp(port->info.name, name, strlen(port->info.name))) + return port; + } + tbl_type++; + } while (tbl_type < PORT_TBL_MAX); + return NULL; +} + +static int mtk_port_tbl_create(struct mtk_port_mngr *port_mngr, struct mtk_port_cfg *cfg, + const int port_cnt, struct mtk_stale_list *s_list) +{ + struct mtk_port_cfg *dflt_port; + struct mtk_port *port; + int i, ret; + + INIT_RADIX_TREE(&port_mngr->port_tbl[PORT_TBL_SAP], GFP_KERNEL); + INIT_RADIX_TREE(&port_mngr->port_tbl[PORT_TBL_MD], GFP_KERNEL); + + /* copy ports from static port cfg table */ + for (i = 0; i < port_cnt; i++) { + dflt_port = cfg + i; + port = mtk_port_alloc_or_restore(port_mngr, dflt_port, s_list); + if (IS_ERR(port)) { + ret = PTR_ERR(port); + goto err_alloc_port; + } + } + return 0; + +err_alloc_port: + /* free the other ports in port table */ + for 
(i--; i >= 0; i--) { + dflt_port = cfg + i; + port = mtk_port_search_by_id(port_mngr, dflt_port->rx_ch); + if (port) + mtk_port_free_or_backup(port_mngr, port, s_list); + } + + return ret; +} + +static void mtk_port_tbl_destroy(struct mtk_port_mngr *port_mngr, struct mtk_stale_list *s_list) +{ + struct radix_tree_iter iter; + struct mtk_port *port; + void __rcu **slot; + int tbl_type; + + /* VQ may be shared by multiple ports, we have to free or move the ports + * after all the ports on the VQ are closed. + */ + /* 1. All ports disable and send trb to close vq */ + tbl_type = PORT_TBL_SAP; + do { + radix_tree_for_each_slot(slot, &port_mngr->port_tbl[tbl_type], &iter, 0) { + MTK_PORT_SEARCH_FROM_RADIX_TREE(port, slot); + MTK_PORT_INTERNAL_NODE_CHECK(port, slot, iter); + ports_ops[port->info.type]->disable(port); + } + tbl_type++; + } while (tbl_type < PORT_TBL_MAX); + + /* 2. After all vq closed, free or backup the ports */ + tbl_type = PORT_TBL_SAP; + do { + radix_tree_for_each_slot(slot, &port_mngr->port_tbl[tbl_type], &iter, 0) { + MTK_PORT_SEARCH_FROM_RADIX_TREE(port, slot); + MTK_PORT_INTERNAL_NODE_CHECK(port, slot, iter); + mtk_port_free_or_backup(port_mngr, port, s_list); + } + tbl_type++; + } while (tbl_type < PORT_TBL_MAX); +} + +static struct mtk_stale_list *mtk_port_stale_list_create(struct mtk_port_mngr *port_mngr) +{ + struct mtk_stale_list *s_list; + + /* cannot use devm_kzalloc here, because should pair with the free operation which + * may be no dev pointer. + */ + s_list = kzalloc(sizeof(*s_list), GFP_KERNEL); + if (!s_list) + return NULL; + + memcpy(s_list->dev_str, port_mngr->ctrl_blk->mdev->dev_str, MTK_DEV_STR_LEN); + s_list->dev_id = -1; + INIT_LIST_HEAD(&s_list->ports); + + mutex_lock(&port_mngr_grp_mtx); + list_add_tail(&s_list->entry, &stale_list_grp); + mutex_unlock(&port_mngr_grp_mtx); + + return s_list; +} + +static void mtk_port_stale_list_destroy(struct mtk_stale_list *s_list) +{ + mutex_lock(&port_mngr_grp_mtx); + list_del(&s_list->entry); + mutex_unlock(&port_mngr_grp_mtx); + kfree(s_list); +} + +static struct mtk_stale_list *mtk_port_stale_list_search(const char *dev_str) +{ + struct mtk_stale_list *tmp, *s_list = NULL; + + mutex_lock(&port_mngr_grp_mtx); + list_for_each_entry(tmp, &stale_list_grp, entry) { + if (!strncmp(tmp->dev_str, dev_str, MTK_DEV_STR_LEN)) { + s_list = tmp; + break; + } + } + mutex_unlock(&port_mngr_grp_mtx); + + return s_list; +} + +/* mtk_port_stale_list_grp_cleanup() - free all stale lists and all ports on it. + * + * This function will be called when driver will be removed. It will search all the stale lists. + * For each stale list, it will free the stale ports. + * + * Return: No return value. 
+ */ +void mtk_port_stale_list_grp_cleanup(void) +{ + struct mtk_stale_list *s_list, *next_s_list; + struct mtk_port *port, *next_port; + + mutex_lock(&port_mngr_grp_mtx); + list_for_each_entry_safe(s_list, next_s_list, &stale_list_grp, entry) { + list_del(&s_list->entry); + + list_for_each_entry_safe(port, next_port, &s_list->ports, stale_entry) { + list_del(&port->stale_entry); + mtk_port_release(&port->kref); + } + + /* can't use devm_kfree, because the port is free, + * can't use port to get dev pointer + */ + kfree(s_list); + } + mutex_unlock(&port_mngr_grp_mtx); +} + +static struct mtk_stale_list *mtk_port_stale_list_init(struct mtk_port_mngr *port_mngr) +{ + struct mtk_stale_list *s_list; + + s_list = mtk_port_stale_list_search(port_mngr->ctrl_blk->mdev->dev_str); + if (!s_list) { + dev_info(port_mngr->ctrl_blk->mdev->dev, "Create stale list\n"); + s_list = mtk_port_stale_list_create(port_mngr); + if (unlikely(!s_list)) + return NULL; + } else { + dev_info(port_mngr->ctrl_blk->mdev->dev, "Reuse old stale list\n"); + } + + mutex_lock(&port_mngr_grp_mtx); + if (s_list->dev_id < 0) { + port_mngr->dev_id = ida_alloc_range(&ccci_dev_ids, 0, + MTK_DFLT_MAX_DEV_CNT - 1, + GFP_KERNEL); + } else { + port_mngr->dev_id = s_list->dev_id; + s_list->dev_id = -1; + } + mutex_unlock(&port_mngr_grp_mtx); + + return s_list; +} + +static void mtk_port_stale_list_exit(struct mtk_port_mngr *port_mngr, struct mtk_stale_list *s_list) +{ + mutex_lock(&port_mngr_grp_mtx); + if (list_empty(&s_list->ports)) { + ida_free(&ccci_dev_ids, port_mngr->dev_id); + mutex_unlock(&port_mngr_grp_mtx); + mtk_port_stale_list_destroy(s_list); + dev_info(port_mngr->ctrl_blk->mdev->dev, "Destroy stale list\n"); + } else { + s_list->dev_id = port_mngr->dev_id; + mutex_unlock(&port_mngr_grp_mtx); + dev_info(port_mngr->ctrl_blk->mdev->dev, "Reserve stale list\n"); + } +} + +static void mtk_port_trb_init(struct mtk_port *port, struct trb *trb, enum mtk_trb_cmd_type cmd, + int (*trb_complete)(struct sk_buff *skb)) +{ + kref_init(&trb->kref); + trb->vqno = port->info.vq_id; + trb->status = MTK_DFLT_TRB_STATUS; + trb->priv = port; + trb->cmd = cmd; + trb->trb_complete = trb_complete; +} + +static void mtk_port_trb_free(struct kref *trb_kref) +{ + struct trb *trb = container_of(trb_kref, struct trb, kref); + struct mtk_port *port = trb->priv; + struct sk_buff *skb; + + skb = container_of((char *)trb, struct sk_buff, cb[0]); + if (trb->cmd == TRB_CMD_TX) + dev_kfree_skb_any(skb); + else + mtk_bm_free(port->port_mngr->ctrl_blk->bm_pool, skb); +} + +static int mtk_port_open_trb_complete(struct sk_buff *skb) +{ + struct trb_open_priv *trb_open_priv = (struct trb_open_priv *)skb->data; + struct trb *trb = (struct trb *)skb->cb; + struct mtk_port *port = trb->priv; + struct mtk_port_mngr *port_mngr; + + port_mngr = port->port_mngr; + + if (trb->status && trb->status != -EBUSY) + goto out; + + if (!trb->status) { + /* The first port which opens the VQ should let port_mngr record the MTU */ + port_mngr->vq_info[trb->vqno].tx_mtu = trb_open_priv->tx_mtu; + port_mngr->vq_info[trb->vqno].rx_mtu = trb_open_priv->rx_mtu; + } + + port->tx_mtu = port_mngr->vq_info[trb->vqno].tx_mtu; + port->rx_mtu = port_mngr->vq_info[trb->vqno].rx_mtu; + + /* Minus the len of the header */ + port->tx_mtu -= MTK_CCCI_H_ELEN; + port->rx_mtu -= MTK_CCCI_H_ELEN; + +out: + wake_up_interruptible_all(&port->trb_wq); + + dev_info(port->port_mngr->ctrl_blk->mdev->dev, + "Open VQ TRB:status:%d, vq:%d, port:%s, tx_mtu:%d. 
rx_mtu:%d\n", + trb->status, trb->vqno, port->info.name, port->tx_mtu, port->rx_mtu); + kref_put(&trb->kref, mtk_port_trb_free); + return 0; +} + +static int mtk_port_close_trb_complete(struct sk_buff *skb) +{ + struct trb *trb = (struct trb *)skb->cb; + struct mtk_port *port = trb->priv; + + wake_up_interruptible_all(&port->trb_wq); + dev_info(port->port_mngr->ctrl_blk->mdev->dev, + "Close VQ TRB: trb->status:%d, vq:%d, port:%s\n", + trb->status, trb->vqno, port->info.name); + kref_put(&trb->kref, mtk_port_trb_free); + + return 0; +} + +static int mtk_port_tx_complete(struct sk_buff *skb) +{ + struct trb *trb = (struct trb *)skb->cb; + struct mtk_port *port = trb->priv; + + if (trb->status < 0) + dev_warn(port->port_mngr->ctrl_blk->mdev->dev, + "Failed to send data: trb->status:%d, vq:%d, port:%s\n", + trb->status, trb->vqno, port->info.name); + + if (port->info.flags & PORT_F_BLOCKING) + wake_up_interruptible_all(&port->trb_wq); + + kref_put(&trb->kref, mtk_port_trb_free); + + return 0; +} + +static int mtk_port_status_check(struct mtk_port *port) +{ + /* If port is enable, it must on port_mngr's port_tbl, so the mdev must exist. */ + if (!test_bit(PORT_S_ENABLE, &port->status)) { + pr_err("[TMI]Unable to use port: (%s) disabled. Caller: %ps\n", + port->info.name, __builtin_return_address(0)); + return -ENODEV; + } + + if (!test_bit(PORT_S_OPEN, &port->status) || test_bit(PORT_S_FLUSH, &port->status) || + !test_bit(PORT_S_RDWR, &port->status)) { + dev_err(port->port_mngr->ctrl_blk->mdev->dev, + "Unable to use port: (%s), port status = 0x%lx. Caller: %ps\n", + port->info.name, port->status, __builtin_return_address(0)); + + return -EBADF; + } + + return 0; +} + +/* mtk_port_send_data() - send data to device through trans layer. + * @port: pointer to channel structure for sending data. + * @data: data to be sent. + * + * This function will be called by port io. + * + * Return: + * actual sent data length if success. + * error value if send failed. 
+ */ +int mtk_port_send_data(struct mtk_port *port, void *data) +{ + struct mtk_port_mngr *port_mngr; + struct mtk_ctrl_trans *trans; + struct sk_buff *skb = data; + struct trb *trb; + int ret, len; + + port_mngr = port->port_mngr; + trans = port_mngr->ctrl_blk->trans; + + trb = (struct trb *)skb->cb; + mtk_port_trb_init(port, trb, TRB_CMD_TX, mtk_port_tx_complete); + len = skb->len; + kref_get(&trb->kref); /* kref count 1->2 */ + +submit_trb: + mutex_lock(&port->write_lock); + ret = mtk_port_status_check(port); + if (!ret) + ret = mtk_ctrl_trb_submit(port_mngr->ctrl_blk, skb); + mutex_unlock(&port->write_lock); + + if (ret == -EAGAIN && port->info.flags & PORT_F_BLOCKING) { + dev_warn(port_mngr->ctrl_blk->mdev->dev, + "Failed to submit trb for port(%s), ret=%d\n", port->info.name, ret); + wait_event_interruptible(port->trb_wq, !VQ_LIST_FULL(trans, trb->vqno)); + goto submit_trb; + } else if (ret < 0) { + dev_warn(port_mngr->ctrl_blk->mdev->dev, + "Failed to submit trb for port(%s), ret=%d\n", port->info.name, ret); + kref_put(&trb->kref, mtk_port_trb_free); /* kref count 2->1 */ + dev_kfree_skb_any(skb); + goto end; + } + + if (!(port->info.flags & PORT_F_BLOCKING)) { + kref_put(&trb->kref, mtk_port_trb_free); + ret = len; + goto end; + } +start_wait: + /* wait trb done, and no timeout in tx blocking mode */ + ret = wait_event_interruptible_timeout(port->trb_wq, + trb->status <= 0 || + test_bit(PORT_S_FLUSH, &port->status), + MTK_DFLT_TRB_TIMEOUT); + + if (ret == -ERESTARTSYS) + goto start_wait; + else if (test_bit(PORT_S_FLUSH, &port->status)) + ret = -EBUSY; + else if (!ret) + ret = -ETIMEDOUT; + else + ret = (!trb->status) ? len : trb->status; + + kref_put(&trb->kref, mtk_port_trb_free); + +end: + return ret; +} + +static int mtk_port_check_rx_seq(struct mtk_port *port, struct mtk_ccci_header *ccci_h) +{ + u16 seq_num, assert_bit; + + seq_num = FIELD_GET(MTK_HDR_FLD_SEQ, le32_to_cpu(ccci_h->status)); + assert_bit = FIELD_GET(MTK_HDR_FLD_AST, le32_to_cpu(ccci_h->status)); + if (assert_bit && port->rx_seq && + ((seq_num - port->rx_seq) & MTK_CHECK_RX_SEQ_MASK) != 1) { + dev_err(port->port_mngr->ctrl_blk->mdev->dev, + " seq num out-of-order %d->%d", + FIELD_GET(MTK_HDR_FLD_CHN, le32_to_cpu(ccci_h->status)), + seq_num, port->rx_seq); + return -EPROTO; + } + + return 0; +} + +static int mtk_port_rx_dispatch(struct sk_buff *skb, int len, void *priv) +{ + struct mtk_port_mngr *port_mngr; + struct mtk_ccci_header *ccci_h; + struct mtk_port *port = priv; + int ret = -EPROTO; + u16 channel; + + if (!skb || !priv) { + pr_err("[TMI] Invalid input value in rx dispatch\n"); + ret = -EINVAL; + goto err_done; + } + + port_mngr = port->port_mngr; + + /* CLDMA will not handle skb structure, so must handle here */ + skb->len = 0; + skb_reset_tail_pointer(skb); + skb_put(skb, len); + + ccci_h = mtk_port_strip_header(skb); + if (unlikely(!ccci_h)) { + dev_warn(port_mngr->ctrl_blk->mdev->dev, + "Unsupported: skb length(%d) is less than ccci header\n", + skb->len); + goto drop_data; + } + + dev_dbg(port_mngr->ctrl_blk->mdev->dev, + "RX header:%08x %08x\n", ccci_h->packet_len, ccci_h->status); + + channel = FIELD_GET(MTK_HDR_FLD_CHN, le32_to_cpu(ccci_h->status)); + port = mtk_port_search_by_id(port_mngr, channel); + if (unlikely(!port)) { + dev_warn(port_mngr->ctrl_blk->mdev->dev, + "Failed to find port by channel:%d\n", channel); + goto drop_data; + } + + /* The sequence number must be continuous */ + ret = mtk_port_check_rx_seq(port, ccci_h); + if (unlikely(ret)) + goto drop_data; + + port->rx_seq = 
FIELD_GET(MTK_HDR_FLD_SEQ, le32_to_cpu(ccci_h->status)); + + ret = ports_ops[port->info.type]->recv(port, skb); + + return ret; + +drop_data: + dev_kfree_skb_any(skb); +err_done: + return ret; +} + +/* mtk_port_add_header() - Add mtk_ccci_header to TX packet. + * @skb: pointer to socket buffer + * + * This function is called by trb sevice. And it will help to + * add mtk_ccci_header data to the head of skb->data. + * + */ +int mtk_port_add_header(struct sk_buff *skb) +{ + struct mtk_ccci_header *ccci_h; + struct mtk_port *port; + struct trb *trb; + int ret = 0; + + trb = (struct trb *)skb->cb; + if (trb->status == 0xADDED) + goto end; + + port = trb->priv; + if (!port) { + ret = -EINVAL; + goto end; + } + + /* Port layer have reserved data length of ccci_head at the skb head */ + ccci_h = skb_push(skb, sizeof(*ccci_h)); + + ccci_h->packet_header = cpu_to_le32(0); + ccci_h->packet_len = cpu_to_le32(skb->len); + ccci_h->ex_msg = cpu_to_le32(0); + ccci_h->status = cpu_to_le32(FIELD_PREP(MTK_HDR_FLD_CHN, port->info.tx_ch) | + FIELD_PREP(MTK_HDR_FLD_SEQ, port->tx_seq++) | + FIELD_PREP(MTK_HDR_FLD_AST, 1)); + + trb->status = 0xADDED; +end: + return ret; +} + +/* mtk_port_strip_header() - remove mtk_ccci_header from RX packet. + * @skb: pointer to socket buffer. + * + * This function will help to remove mtk_ccci_header data from the head of skb->data. + * But it will not check if the data of skb head is mtk_ccci_header actually. + * + * Return: + * ccci_h: pointer to mtk_ccci_header stripped from socket buffer. + * NULL: data length is invalid. + */ +struct mtk_ccci_header *mtk_port_strip_header(struct sk_buff *skb) +{ + struct mtk_ccci_header *ccci_h; + + if (skb->len < sizeof(*ccci_h)) { + pr_err("[TMI] Invalid input value\n"); + return NULL; + } + + ccci_h = (struct mtk_ccci_header *)skb->data; + skb_pull(skb, sizeof(*ccci_h)); + + return ccci_h; +} + +/* mtk_port_mngr_vq_status_check() - Checking VQ status before enable or disable VQ. + * @skb: pointer to socket buffer + * + * This function called before enable or disable VQ, check the VQ status by calculate + * count of ports which have enabled the VQ. + * + * Return: + * 0: first user for enable or last user for disable + * -EBUSY: current VQ is occupied by other ports + * -EINVAL: error command + */ +int mtk_port_mngr_vq_status_check(struct sk_buff *skb) +{ + struct trb *trb = (struct trb *)skb->cb; + struct trb_open_priv *trb_open_priv; + struct mtk_port *port = trb->priv; + struct mtk_port_mngr *port_mngr; + int ret = 0; + + port_mngr = port->port_mngr; + switch (trb->cmd) { + case TRB_CMD_ENABLE: + port_mngr->vq_info[trb->vqno].port_cnt++; + if (port_mngr->vq_info[trb->vqno].port_cnt == 1) { + trb_open_priv = (struct trb_open_priv *)skb->data; + trb_open_priv->rx_done = mtk_port_rx_dispatch; + break; + } + + trb->status = -EBUSY; + trb->trb_complete(skb); + ret = -EBUSY; + break; + case TRB_CMD_DISABLE: + port_mngr->vq_info[trb->vqno].port_cnt--; + if (!port_mngr->vq_info[trb->vqno].port_cnt) + break; + + dev_info(port_mngr->ctrl_blk->mdev->dev, + "VQ(%d) still has %d port, skip to handle close skb\n", + trb->vqno, port_mngr->vq_info[trb->vqno].port_cnt); + trb->status = -EBUSY; + trb->trb_complete(skb); + ret = -EBUSY; + break; + default: + dev_err(port_mngr->ctrl_blk->mdev->dev, "Invalid trb command(%d)\n", trb->cmd); + ret = -EINVAL; + break; + } + return ret; +} + +/* mtk_port_vq_enable() - Function for enable virtual queue. + * @port: pointer to channel structure for sending data. 
+ * + * This function will be called when enable/create port. + * + * Return: + * trb->status if success. + * error value if fail. + */ +int mtk_port_vq_enable(struct mtk_port *port) +{ + struct mtk_port_mngr *port_mngr = port->port_mngr; + struct sk_buff *skb; + int ret = -ENOMEM; + struct trb *trb; + + skb = mtk_bm_alloc(port_mngr->ctrl_blk->bm_pool); + if (!skb) { + dev_err(port->port_mngr->ctrl_blk->mdev->dev, + "Failed to alloc skb of port(%s)\n", port->info.name); + goto end; + } + skb_put(skb, sizeof(struct trb_open_priv)); + trb = (struct trb *)skb->cb; + mtk_port_trb_init(port, trb, TRB_CMD_ENABLE, mtk_port_open_trb_complete); + kref_get(&trb->kref); + + ret = mtk_ctrl_trb_submit(port_mngr->ctrl_blk, skb); + if (ret) { + dev_err(port_mngr->ctrl_blk->mdev->dev, + "Failed to submit trb for port(%s), ret=%d\n", port->info.name, ret); + kref_put(&trb->kref, mtk_port_trb_free); + mtk_port_trb_free(&trb->kref); + goto end; + } + +start_wait: + /* wait trb done */ + ret = wait_event_interruptible_timeout(port->trb_wq, trb->status <= 0, + MTK_DFLT_TRB_TIMEOUT); + if (ret == -ERESTARTSYS) + goto start_wait; + else if (!ret) + ret = -ETIMEDOUT; + else + ret = trb->status; + + kref_put(&trb->kref, mtk_port_trb_free); + +end: + return ret; +} + +/* mtk_port_vq_disable() - Function for disable virtual queue. + * @port: pointer to channel structure for sending data. + * + * This function will be called when disable/destroy port. + * + * Return: + * trb->status if success. + * error value if fail. + */ +int mtk_port_vq_disable(struct mtk_port *port) +{ + struct mtk_port_mngr *port_mngr = port->port_mngr; + struct sk_buff *skb; + int ret = -ENOMEM; + struct trb *trb; + + skb = mtk_bm_alloc(port->port_mngr->ctrl_blk->bm_pool); + if (!skb) { + dev_err(port->port_mngr->ctrl_blk->mdev->dev, + "Failed to alloc skb of port(%s)\n", port->info.name); + goto end; + } + skb_put(skb, sizeof(struct trb_open_priv)); + trb = (struct trb *)skb->cb; + mtk_port_trb_init(port, trb, TRB_CMD_DISABLE, mtk_port_close_trb_complete); + kref_get(&trb->kref); + + mutex_lock(&port->write_lock); + ret = mtk_ctrl_trb_submit(port_mngr->ctrl_blk, skb); + mutex_unlock(&port->write_lock); + if (ret) { + dev_warn(port_mngr->ctrl_blk->mdev->dev, + "Failed to submit trb for port(%s), ret=%d\n", port->info.name, ret); + kref_put(&trb->kref, mtk_port_trb_free); + mtk_port_trb_free(&trb->kref); + goto end; + } + +start_wait: + /* wait trb done (must wait until close vq done) */ + ret = wait_event_interruptible(port->trb_wq, trb->status <= 0); + if (ret == -ERESTARTSYS) + goto start_wait; + + ret = trb->status; + kref_put(&trb->kref, mtk_port_trb_free); + +end: + return ret; +} + +/* mtk_port_mngr_init() - Initialize mtk_port_mngr and mtk_stale_list. + * @ctrl_blk: pointer to mtk_ctrl_blk. + * + * This function called after trans layer complete initialization. + * Structure mtk_port_mngr is main body responsible for port management; + * and this function alloc memory for it. + * If port manager can't find stale list in stale list group by + * using dev_str, it will also alloc memory for structure mtk_stale_list. + * And then it will initialize port table. 
+ * + * Return: + * 0: mtk_port_mngr is initialized successfully. + * -ENOMEM: failed to allocate memory for the structure. + */ +int mtk_port_mngr_init(struct mtk_ctrl_blk *ctrl_blk) +{ + struct mtk_port_mngr *port_mngr; + struct mtk_stale_list *s_list; + int ret = -ENOMEM; + + port_mngr = devm_kzalloc(ctrl_blk->mdev->dev, sizeof(*port_mngr), GFP_KERNEL); + if (unlikely(!port_mngr)) { + dev_err(ctrl_blk->mdev->dev, "Failed to alloc memory for port_mngr\n"); + goto err_done; + } + + /* 1.Init port manager basic fields */ + port_mngr->ctrl_blk = ctrl_blk; + + /* 2.Init mtk_stale_list or re-use old one */ + s_list = mtk_port_stale_list_init(port_mngr); + if (!s_list) { + dev_err(ctrl_blk->mdev->dev, "Failed to init mtk_stale_list\n"); + goto err_init_stale_list; + } + + /* 3.Put default ports and stale ports to port table */ + ret = mtk_port_tbl_create(port_mngr, (struct mtk_port_cfg *)port_cfg, + ARRAY_SIZE(port_cfg), s_list); + if (unlikely(ret)) { + dev_err(ctrl_blk->mdev->dev, "Failed to create port_tbl\n"); + goto err_create_tbl; + } + ctrl_blk->port_mngr = port_mngr; + dev_info(ctrl_blk->mdev->dev, "Initialize port_mngr successfully\n"); + + return ret; + +err_create_tbl: + mtk_port_stale_list_exit(port_mngr, s_list); +err_init_stale_list: + devm_kfree(ctrl_blk->mdev->dev, port_mngr); +err_done: + return ret; +} + +/* mtk_port_mngr_exit() - Free the structure mtk_port_mngr. + * @ctrl_blk: pointer to mtk_ctrl_blk. + * + * This function is called before the trans layer starts to exit. + * It destroys the port table and the stale list, and frees the port manager entity. + * If some ports are still open, it moves them to the stale list + * and frees the rest; if all ports are closed, it also frees the stale list. + * + * Return: No return value. + */ +void mtk_port_mngr_exit(struct mtk_ctrl_blk *ctrl_blk) +{ + struct mtk_port_mngr *port_mngr = ctrl_blk->port_mngr; + struct mtk_stale_list *s_list; + + s_list = mtk_port_stale_list_search(port_mngr->ctrl_blk->mdev->dev_str); + /* 1.free or backup ports, then destroy port table */ + mtk_port_tbl_destroy(port_mngr, s_list); + /* 2.destroy stale list or backup register info to it */ + mtk_port_stale_list_exit(port_mngr, s_list); + /* 3.free port_mngr structure */ + devm_kfree(ctrl_blk->mdev->dev, port_mngr); + ctrl_blk->port_mngr = NULL; + dev_info(ctrl_blk->mdev->dev, "Exit port_mngr successfully\n"); +} diff --git a/drivers/net/wwan/mediatek/mtk_port.h b/drivers/net/wwan/mediatek/mtk_port.h new file mode 100644 index 000000000000..56ed82c41cc2 --- /dev/null +++ b/drivers/net/wwan/mediatek/mtk_port.h @@ -0,0 +1,222 @@ +/* SPDX-License-Identifier: BSD-3-Clause-Clear + * + * Copyright (c) 2022, MediaTek Inc.
+ */ + +#ifndef __MTK_PORT_H__ +#define __MTK_PORT_H__ + +#include +#include +#include +#include +#include + +#include "mtk_ctrl_plane.h" +#include "mtk_dev.h" + +#define MTK_PEER_ID_MASK (0xF000) +#define MTK_PEER_ID_SHIFT (12) +#define MTK_PEER_ID(ch) (((ch) & MTK_PEER_ID_MASK) >> MTK_PEER_ID_SHIFT) +#define MTK_PEER_ID_SAP (0x1) +#define MTK_PEER_ID_MD (0x2) +#define MTK_CH_ID_MASK (0x0FFF) +#define MTK_CH_ID(ch) ((ch) & MTK_CH_ID_MASK) +#define MTK_DFLT_MAX_DEV_CNT (10) +#define MTK_DFLT_PORT_NAME_LEN (20) + +/* Mapping MTK_PEER_ID and mtk_port_tbl index */ +#define MTK_PORT_TBL_TYPE(ch) (MTK_PEER_ID(ch) - 1) + +/* ccci header length + reserved space that is used in exception flow */ +#define MTK_CCCI_H_ELEN (128) + +#define MTK_HDR_FLD_AST ((u32)BIT(31)) +#define MTK_HDR_FLD_SEQ GENMASK(30, 16) +#define MTK_HDR_FLD_CHN GENMASK(15, 0) + +#define MTK_INFO_FLD_EN ((u16)BIT(15)) +#define MTK_INFO_FLD_CHID GENMASK(14, 0) + +/* enum mtk_port_status - Descript port's some status. + * @PORT_S_DFLT: default value when port initialize. + * @PORT_S_ENABLE: port has been enabled. + * @PORT_S_OPEN: port has been opened. + * @PORT_S_RDWR: port R/W is allowed. + * @PORT_S_FLUSH: driver is flushing. + * @PORT_S_ON_STALE_LIST: port is on stale list. + */ +enum mtk_port_status { + PORT_S_DFLT = 0, + PORT_S_ENABLE, + PORT_S_OPEN, + PORT_S_RDWR, + PORT_S_FLUSH, + PORT_S_ON_STALE_LIST, +}; + +enum mtk_ccci_ch { + /* to sAP */ + CCCI_SAP_CONTROL_RX = 0x1000, + CCCI_SAP_CONTROL_TX = 0x1001, + /* to MD */ + CCCI_CONTROL_RX = 0x2000, + CCCI_CONTROL_TX = 0x2001, +}; + +enum mtk_port_flag { + PORT_F_DFLT = 0, + PORT_F_BLOCKING = BIT(1), + PORT_F_ALLOW_DROP = BIT(2), +}; + +enum mtk_port_tbl { + PORT_TBL_SAP, + PORT_TBL_MD, + PORT_TBL_MAX +}; + +enum mtk_port_type { + PORT_TYPE_INTERNAL, + PORT_TYPE_MAX +}; + +struct mtk_internal_port { + void *arg; + int (*recv_cb)(void *arg, struct sk_buff *skb); +}; + +/* union mtk_port_priv - Contains private data for different type of ports. + * @i_priv: private data for internal other user. + */ +union mtk_port_priv { + struct mtk_internal_port i_priv; +}; + +/* struct mtk_port_cfg - Contains port's basic configuration. + * @tx_ch: TX channel id (peer id (bit 12~15)+ channel id(bit 0 ~11)). + * @rx_ch: RX channel id. + * @vq_id: virtual queue id. + * @type: port type. + * @name: port name. + * @flags: port flags. + */ +struct mtk_port_cfg { + enum mtk_ccci_ch tx_ch; + enum mtk_ccci_ch rx_ch; + unsigned char vq_id; + enum mtk_port_type type; + char name[MTK_DFLT_PORT_NAME_LEN]; + unsigned char flags; +}; + +/* struct mtk_port - Represents a port of the control plane. + * @mtk_port_cfg: port's basic configuration. + * @kref: reference count. + * @enable: enable msg from modem. + * @status: port's current state, like open, enable etc. + * @minor: device minor id offset. + * @tx_seq: TX sequence id for mtk_ccci_header. + * @rx_seq: RX sequence id for mtk_ccci_header. + * @tx_mtu: TX max trans unit (64k at most). + * @rx_mtu: RX max trans unit (64k at most). + * @rx_skb_list: RX skb buffer. + * @rx_data_len: data length in RX skb buffer. + * @rx_buf_size: max size of RX skb buffer. + * @trb_wq: wait queue for trb submit. + * @rx_wq: wait queue for reading. + * @read_buf_lock: mutex lock used in user read function. + * @stale_entry: list head entry for stale list. + * @dev_str: string to identify the device which the port belongs. + * @port_mngr: point to mtk_port_mngr. + * @priv: private data for different type. 
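 + * @write_lock: mutex to serialize user writes against the port disable flow.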
+ */ +struct mtk_port { + struct mtk_port_cfg info; + struct kref kref; + bool enable; + unsigned long status; + unsigned int minor; + unsigned short tx_seq; + unsigned short rx_seq; + unsigned int tx_mtu; + unsigned int rx_mtu; + struct sk_buff_head rx_skb_list; + unsigned int rx_data_len; + unsigned int rx_buf_size; + wait_queue_head_t trb_wq; + wait_queue_head_t rx_wq; + /* Use write_lock to lock user's write and disable thread */ + struct mutex write_lock; + /* Used to lock user's read thread */ + struct mutex read_buf_lock; + struct list_head stale_entry; + char dev_str[MTK_DEV_STR_LEN]; + struct mtk_port_mngr *port_mngr; + union mtk_port_priv priv; +}; + +struct mtk_vq_info { + int tx_mtu; + int rx_mtu; + unsigned int port_cnt; + bool color; +}; + +/* struct mtk_port_mngr - Include all the port information of a device. + * @ctrl_blk: pointer to mtk_ctrl_blk structure. + * @port_tbl: the table which manages sAP ports and md ports. + * @vq_info : manages the control port's virtual queue. + * @port_attr_kobj: pointer to attribute kobject structure. + * @dev_id: index to identify the device. + */ +struct mtk_port_mngr { + struct mtk_ctrl_blk *ctrl_blk; + struct radix_tree_root port_tbl[PORT_TBL_MAX]; + struct mtk_vq_info vq_info[VQ_NUM]; + struct kobject *port_attr_kobj; + int dev_id; +}; + +struct mtk_stale_list { + struct list_head entry; + struct list_head ports; + char dev_str[MTK_DEV_STR_LEN]; + int dev_id; +}; + +struct mtk_port_info { + __le16 channel; + __le16 reserved; +} __packed; + +struct mtk_port_enum_msg { + __le32 head_pattern; + __le16 port_cnt; + __le16 version; + __le32 tail_pattern; + u8 data[]; +} __packed; + +struct mtk_ccci_header { + __le32 packet_header; + __le32 packet_len; + __le32 status; + __le32 ex_msg; +}; + +extern const struct port_ops *ports_ops[PORT_TYPE_MAX]; + +void mtk_port_release(struct kref *port_kref); +struct mtk_port *mtk_port_search_by_name(struct mtk_port_mngr *port_mngr, char *name); +void mtk_port_stale_list_grp_cleanup(void); +int mtk_port_add_header(struct sk_buff *skb); +struct mtk_ccci_header *mtk_port_strip_header(struct sk_buff *skb); +int mtk_port_send_data(struct mtk_port *port, void *data); +int mtk_port_vq_enable(struct mtk_port *port); +int mtk_port_vq_disable(struct mtk_port *port); +int mtk_port_mngr_vq_status_check(struct sk_buff *skb); +int mtk_port_mngr_init(struct mtk_ctrl_blk *ctrl_blk); +void mtk_port_mngr_exit(struct mtk_ctrl_blk *ctrl_blk); + +#endif /* __MTK_PORT_H__ */ diff --git a/drivers/net/wwan/mediatek/mtk_port_io.c b/drivers/net/wwan/mediatek/mtk_port_io.c new file mode 100644 index 000000000000..efbbe97c50dd --- /dev/null +++ b/drivers/net/wwan/mediatek/mtk_port_io.c @@ -0,0 +1,301 @@ +// SPDX-License-Identifier: BSD-3-Clause-Clear +/* + * Copyright (c) 2022, MediaTek Inc. + */ + +#include "mtk_port_io.h" + +#define MTK_DFLT_READ_TIMEOUT (1 * HZ) + +static int mtk_port_get_locked(struct mtk_port *port) +{ + int ret = 0; + + /* Protect the structure not released suddenly during the check */ + mutex_lock(&port_mngr_grp_mtx); + if (!port) { + mutex_unlock(&port_mngr_grp_mtx); + pr_err("[TMI] Port does not exist\n"); + return -ENODEV; + } + kref_get(&port->kref); + mutex_unlock(&port_mngr_grp_mtx); + + return ret; +} + +/* After calling the mtk_port_put_locked(), + * do not use the port pointer because the port structure might be freed. 
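 + * The final reference drop invokes mtk_port_release() with port_mngr_grp_mtx held.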
+ */ +static void mtk_port_put_locked(struct mtk_port *port) +{ + mutex_lock(&port_mngr_grp_mtx); + kref_put(&port->kref, mtk_port_release); + mutex_unlock(&port_mngr_grp_mtx); +} + +static void mtk_port_struct_init(struct mtk_port *port) +{ + port->tx_seq = 0; + port->rx_seq = -1; + clear_bit(PORT_S_ENABLE, &port->status); + kref_init(&port->kref); + skb_queue_head_init(&port->rx_skb_list); + port->rx_buf_size = MTK_RX_BUF_SIZE; + init_waitqueue_head(&port->trb_wq); + init_waitqueue_head(&port->rx_wq); + mutex_init(&port->read_buf_lock); +} + +static int mtk_port_internal_init(struct mtk_port *port) +{ + mtk_port_struct_init(port); + port->enable = false; + + return 0; +} + +static int mtk_port_internal_exit(struct mtk_port *port) +{ + if (test_bit(PORT_S_ENABLE, &port->status)) + ports_ops[port->info.type]->disable(port); + + return 0; +} + +static int mtk_port_reset(struct mtk_port *port) +{ + port->tx_seq = 0; + port->rx_seq = -1; + + return 0; +} + +static int mtk_port_internal_enable(struct mtk_port *port) +{ + int ret; + + if (test_bit(PORT_S_ENABLE, &port->status)) { + dev_info(port->port_mngr->ctrl_blk->mdev->dev, + "Skip to enable port( %s )\n", port->info.name); + return 0; + } + + ret = mtk_port_vq_enable(port); + if (ret && ret != -EBUSY) + return ret; + + set_bit(PORT_S_RDWR, &port->status); + set_bit(PORT_S_ENABLE, &port->status); + dev_info(port->port_mngr->ctrl_blk->mdev->dev, + "Port(%s) enable is complete\n", port->info.name); + + return 0; +} + +static int mtk_port_internal_disable(struct mtk_port *port) +{ + if (!test_and_clear_bit(PORT_S_ENABLE, &port->status)) { + dev_info(port->port_mngr->ctrl_blk->mdev->dev, + "Skip to disable port(%s)\n", port->info.name); + return 0; + } + + clear_bit(PORT_S_RDWR, &port->status); + mtk_port_vq_disable(port); + + dev_info(port->port_mngr->ctrl_blk->mdev->dev, + "Port(%s) disable is complete\n", port->info.name); + + return 0; +} + +static int mtk_port_internal_recv(struct mtk_port *port, struct sk_buff *skb) +{ + struct mtk_internal_port *priv; + int ret = -ENXIO; + + if (!test_bit(PORT_S_OPEN, &port->status)) { + /* If current port is not opened by any user, the received data will be dropped */ + dev_warn_ratelimited(port->port_mngr->ctrl_blk->mdev->dev, + "Unabled to recv: (%s) not opened\n", port->info.name); + goto drop_data; + } + + priv = &port->priv.i_priv; + if (!priv->recv_cb || !priv->arg) { + dev_warn_ratelimited(port->port_mngr->ctrl_blk->mdev->dev, + "Invalid (%s) recv_cb, drop packet\n", port->info.name); + goto drop_data; + } + + ret = priv->recv_cb(priv->arg, skb); + return ret; + +drop_data: + mtk_port_free_rx_skb(port, skb); + return ret; +} + +static int mtk_port_common_open(struct mtk_port *port) +{ + int ret = 0; + + if (!test_bit(PORT_S_ENABLE, &port->status)) { + pr_err("[TMI] Failed to open: (%s) is disabled\n", port->info.name); + ret = -ENODEV; + goto err; + } + + if (test_bit(PORT_S_OPEN, &port->status)) { + dev_warn(port->port_mngr->ctrl_blk->mdev->dev, + "Unabled to open port(%s) twice\n", port->info.name); + ret = -EBUSY; + goto err; + } + + dev_info(port->port_mngr->ctrl_blk->mdev->dev, "Open port %s\n", port->info.name); + skb_queue_purge(&port->rx_skb_list); + set_bit(PORT_S_OPEN, &port->status); + +err: + return ret; +} + +static void mtk_port_common_close(struct mtk_port *port) +{ + dev_info(port->port_mngr->ctrl_blk->mdev->dev, "Close port %s\n", port->info.name); + + clear_bit(PORT_S_OPEN, &port->status); + + skb_queue_purge(&port->rx_skb_list); +} + +/* mtk_port_internal_open() - Function for 
opening an internal port. + * @mdev: pointer to mtk_md_dev. + * @name: the name of the port to be opened. + * @flag: optional operation type. + * + * This function is called by the FSM. It is used to open the internal port + * MDCTRL/SAPCTRL when some control message needs to be transferred. + * + * Return: + * mtk_port structure on success. + * error value on failure. + */ +void *mtk_port_internal_open(struct mtk_md_dev *mdev, char *name, int flag) +{ + struct mtk_port_mngr *port_mngr; + struct mtk_ctrl_blk *ctrl_blk; + struct mtk_port *port; + int ret; + + ctrl_blk = mdev->ctrl_blk; + port_mngr = ctrl_blk->port_mngr; + + port = mtk_port_search_by_name(port_mngr, name); + ret = mtk_port_get_locked(port); + if (ret) + goto err; + + ret = mtk_port_common_open(port); + if (ret) { + mtk_port_put_locked(port); + goto err; + } + + if (flag & O_NONBLOCK) + port->info.flags &= ~PORT_F_BLOCKING; + else + port->info.flags |= PORT_F_BLOCKING; +err: + return port; +} + +/* mtk_port_internal_close() - Function for closing an internal port. + * @i_port: the port to be closed. + * + * This function is called by the FSM. It is used to close the internal port MDCTRL/SAPCTRL. + * + * Return: + * 0: success. + * -EINVAL: port is NULL. + * -EBADF: port is not opened. + */ +int mtk_port_internal_close(void *i_port) +{ + struct mtk_port *port = i_port; + int ret = 0; + + if (!port) { + ret = -EINVAL; + goto err; + } + + /* Avoid closing the port twice */ + if (!test_bit(PORT_S_OPEN, &port->status)) { + pr_err("[TMI] Port(%s) has been closed\n", port->info.name); + ret = -EBADF; + goto err; + } + + mtk_port_common_close(port); + mtk_port_put_locked(port); +err: + return ret; +} + +/* mtk_port_internal_write() - Function for writing internal data. + * @i_port: pointer to mtk_port, indicating the channel for sending data. + * @skb: includes the data to be sent. + * + * This function is called by the FSM. It is used to write control messages, + * such as handshake messages, through the internal port MDCTRL/SAPCTRL. + * + * Return: + * actual sent data length on success. + * error value if the send failed. + */ +int mtk_port_internal_write(void *i_port, struct sk_buff *skb) +{ + struct mtk_port *port = i_port; + + if (!port) + return -EINVAL; + + return mtk_port_send_data(port, skb); +} + +/* mtk_port_internal_recv_register() - Function for registering the receive callback. + * @i_port: pointer to mtk_port, indicating the channel for receiving data. + * @cb: callback for receiving data. + * @arg: argument to be passed back to @cb. + * + * This function is called by the FSM. It is used to register a callback for receiving data. + * + * Return: No return value. + * + */ +void mtk_port_internal_recv_register(void *i_port, + int (*cb)(void *priv, struct sk_buff *skb), + void *arg) +{ + struct mtk_port *port = i_port; + struct mtk_internal_port *priv; + + priv = &port->priv.i_priv; + priv->arg = arg; + priv->recv_cb = cb; +} + +static const struct port_ops port_internal_ops = { + .init = mtk_port_internal_init, + .exit = mtk_port_internal_exit, + .reset = mtk_port_reset, + .enable = mtk_port_internal_enable, + .disable = mtk_port_internal_disable, + .recv = mtk_port_internal_recv, +}; + +const struct port_ops *ports_ops[PORT_TYPE_MAX] = { + &port_internal_ops, +}; diff --git a/drivers/net/wwan/mediatek/mtk_port_io.h b/drivers/net/wwan/mediatek/mtk_port_io.h new file mode 100644 index 000000000000..859ade43d923 --- /dev/null +++ b/drivers/net/wwan/mediatek/mtk_port_io.h @@ -0,0 +1,45 @@ +/* SPDX-License-Identifier: BSD-3-Clause-Clear + * + * Copyright (c) 2022, MediaTek Inc.
+ */ + +#ifndef __MTK_PORT_IO_H__ +#define __MTK_PORT_IO_H__ + +#include +#include + +#include "mtk_bm.h" +#include "mtk_port.h" + +#define MTK_RX_BUF_SIZE (1024 * 1024) + +extern struct mutex port_mngr_grp_mtx; + +struct port_ops { + int (*init)(struct mtk_port *port); + int (*exit)(struct mtk_port *port); + int (*reset)(struct mtk_port *port); + int (*enable)(struct mtk_port *port); + int (*disable)(struct mtk_port *port); + int (*recv)(struct mtk_port *port, struct sk_buff *skb); +}; + +void *mtk_port_internal_open(struct mtk_md_dev *mdev, char *name, int flag); +int mtk_port_internal_close(void *i_port); +int mtk_port_internal_write(void *i_port, struct sk_buff *skb); +void mtk_port_internal_recv_register(void *i_port, + int (*cb)(void *priv, struct sk_buff *skb), + void *arg); + +static inline void mtk_port_free_rx_skb(struct mtk_port *port, struct sk_buff *skb) +{ + if (!port) + dev_kfree_skb_any(skb); + else if (port->rx_mtu > VQ_MTU_3_5K) + mtk_bm_free(port->port_mngr->ctrl_blk->bm_pool_63K, skb); + else + mtk_bm_free(port->port_mngr->ctrl_blk->bm_pool, skb); +} + +#endif /* __MTK_PORT_IO_H__ */ diff --git a/drivers/net/wwan/mediatek/pcie/mtk_pci.c b/drivers/net/wwan/mediatek/pcie/mtk_pci.c index 5be61178d30d..49cd98627410 100644 --- a/drivers/net/wwan/mediatek/pcie/mtk_pci.c +++ b/drivers/net/wwan/mediatek/pcie/mtk_pci.c @@ -12,6 +12,7 @@ #include #include "mtk_pci.h" +#include "mtk_port_io.h" #include "mtk_reg.h" #define MTK_PCI_TRANSPARENT_ATR_SIZE (0x3F) @@ -1158,6 +1159,7 @@ module_init(mtk_drv_init); static void __exit mtk_drv_exit(void) { pci_unregister_driver(&mtk_pci_drv); + mtk_port_stale_list_grp_cleanup(); } module_exit(mtk_drv_exit); From patchwork Tue Nov 22 11:11:45 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?b?WWFuY2hhbyBZYW5nICjmnajlvabotoUp?= X-Patchwork-Id: 24301 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp2146548wrr; Tue, 22 Nov 2022 03:36:47 -0800 (PST) X-Google-Smtp-Source: AA0mqf727FdsJDy6hOV4GnWZUtSzgNNUZjmJoVZbNDDTDVVdrW8CWkh4iQ/zqNlnbhBVOS6zBHby X-Received: by 2002:a62:79c9:0:b0:56d:2e42:987e with SMTP id u192-20020a6279c9000000b0056d2e42987emr25204157pfc.61.1669117007594; Tue, 22 Nov 2022 03:36:47 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1669117007; cv=none; d=google.com; s=arc-20160816; b=GhynwY9qxnwh4Rjb5VeowX73oPQbimecEm71ixO4GikT3MuQT0vzJ5CFVBQ0YIPZBf AzHsG0waORNMT/c1ky0+dE/xBrDytwK0ZPPgOB/EF2mvYo8BX06VeU0ctnU+INntgo9S WHuQ19qmrpo7mNZvD09hka2yQ0XU0U7DnACVfQ6fFCjjpGGfr9C7e8UsRdyT4f1ugoKT 8c64xFQY0Wdyai5hA+wJHLDYvEi8VLR3CBd4RUmYkkcGOmTY6WwcSqxI7ij713YbTr8M nFGxf1vGG+7oxugVR9uMk2ABBhz1z1sssslQaf+c9DLTN8xzuC7kP2+OYSWojxm6Nd/O wHnw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:references:in-reply-to:message-id :date:subject:cc:to:from:dkim-signature; bh=MOZ9FhjMeuaBi89bJk9xeVRqPccawdZDCzt2YxI6/qs=; b=0eel6ETA3/64qFiHmath7P/0JSTQKTWr9i2QYcbNoA3pd/9GCxi6PUUXoK+WhEJ1DJ G5QfzEJ+bkO+MeIILHs7iNKVrGTZpryKVAJpIu8DxE4EcAlEH1t6K/qx3QyFbPAycYvC bvOXCS8bOsD8GQ1E/vaankD8TUFSCL4/v1sRD/ElYjRxXZUyTpNgFYFVXGJ7xGYmBATu sDkHEOsoglHY4wPvObe8Py4MNDYGjSfTsgAHRezDpjCDKUksoDm28v2gdbhCmUxgU7w0 wimjMY+SP1Yj2DWSovDOpX0m99X1lKptl67amdgSBfH2+IxqP8dIQYEiewCpHIGeAhPi YPQg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@mediatek.com header.s=dk header.b=Pj+puQ0D; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 
2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=QUARANTINE sp=QUARANTINE dis=NONE) header.from=mediatek.com From: Yanchao Yang To: Loic Poulain , Sergey Ryazanov , Johannes Berg , "David S .
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , netdev ML , kernel ML CC: MTK ML , Liang Lu , Haijun Liu , Hua Yang , Ting Wang , Felix Chen , Mingliang Xu , Min Dong , Aiden Wang , Guohao Zhang , Chris Feng , Yanchao Yang , Lambert Wang , Mingchuang Qiao , Xiayu Zhang , Haozhe Chang , MediaTek Corporation Subject: [PATCH net-next v1 06/13] net: wwan: tmi: Add FSM thread Date: Tue, 22 Nov 2022 19:11:45 +0800 Message-ID: <20221122111152.160377-7-yanchao.yang@mediatek.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20221122111152.160377-1-yanchao.yang@mediatek.com> References: <20221122111152.160377-1-yanchao.yang@mediatek.com> MIME-Version: 1.0 X-MTK: N X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_MSPIKE_H2,SPF_HELO_PASS, SPF_PASS,UNPARSEABLE_RELAY autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1750196035243505244?= X-GMAIL-MSGID: =?utf-8?q?1750196035243505244?= From: MediaTek Corporation The FSM (Finite-state Machine) thread is responsible for synchronizing the actions of different modules. The asynchronous events from the device or the OS will trigger a state transition. The FSM thread will append it to the event queue when an event arrives. It handles the events sequentially. After processing the event, the FSM thread notifies other modules before and after the state transition. Seven FSM states are defined. They can transition from one state to another, self-transition in some states, and transition in some sub-states. Signed-off-by: Mingliang Xu Signed-off-by: MediaTek Corporation --- drivers/net/wwan/mediatek/Makefile | 3 +- drivers/net/wwan/mediatek/mtk_cldma.c | 35 + drivers/net/wwan/mediatek/mtk_cldma.h | 2 + drivers/net/wwan/mediatek/mtk_ctrl_plane.c | 267 +++- drivers/net/wwan/mediatek/mtk_ctrl_plane.h | 9 + drivers/net/wwan/mediatek/mtk_dev.c | 12 + drivers/net/wwan/mediatek/mtk_dev.h | 2 + drivers/net/wwan/mediatek/mtk_fsm.c | 1310 +++++++++++++++++ drivers/net/wwan/mediatek/mtk_fsm.h | 178 +++ drivers/net/wwan/mediatek/mtk_port.c | 319 +++- drivers/net/wwan/mediatek/mtk_port.h | 7 + drivers/net/wwan/mediatek/mtk_port_io.c | 5 +- .../wwan/mediatek/pcie/mtk_cldma_drv_t800.c | 45 + .../wwan/mediatek/pcie/mtk_cldma_drv_t800.h | 2 + drivers/net/wwan/mediatek/pcie/mtk_pci.c | 1 + drivers/net/wwan/mediatek/pcie/mtk_reg.h | 11 + 16 files changed, 2177 insertions(+), 31 deletions(-) create mode 100644 drivers/net/wwan/mediatek/mtk_fsm.c create mode 100644 drivers/net/wwan/mediatek/mtk_fsm.h diff --git a/drivers/net/wwan/mediatek/Makefile b/drivers/net/wwan/mediatek/Makefile index 177211b92826..60a32d46183b 100644 --- a/drivers/net/wwan/mediatek/Makefile +++ b/drivers/net/wwan/mediatek/Makefile @@ -10,7 +10,8 @@ mtk_tmi-y = \ mtk_cldma.o \ pcie/mtk_cldma_drv_t800.o \ mtk_port.o \ - mtk_port_io.o + mtk_port_io.o \ + mtk_fsm.o ccflags-y += -I$(srctree)/$(src)/ ccflags-y += -I$(srctree)/$(src)/pcie/ diff --git a/drivers/net/wwan/mediatek/mtk_cldma.c b/drivers/net/wwan/mediatek/mtk_cldma.c index dc1713307797..723237547650 100644 --- a/drivers/net/wwan/mediatek/mtk_cldma.c +++ b/drivers/net/wwan/mediatek/mtk_cldma.c @@ -35,6 +35,7 @@ static int mtk_cldma_init(struct mtk_ctrl_trans *trans) cd->hw_ops.txq_free = mtk_cldma_txq_free_t800; cd->hw_ops.rxq_free = mtk_cldma_rxq_free_t800; 
cd->hw_ops.start_xfer = mtk_cldma_start_xfer_t800; + cd->hw_ops.fsm_state_listener = mtk_cldma_fsm_state_listener_t800; trans->dev[CLDMA_CLASS_ID] = cd; @@ -250,9 +251,43 @@ static int mtk_cldma_trb_process(void *dev, struct sk_buff *skb) return err; } +static void mtk_cldma_fsm_state_listener(struct mtk_fsm_param *param, struct mtk_ctrl_trans *trans) +{ + struct cldma_dev *cd = trans->dev[CLDMA_CLASS_ID]; + struct cldma_hw *hw; + int i; + + switch (param->to) { + case FSM_STATE_POSTDUMP: + cd->hw_ops.init(cd, CLDMA0); + break; + case FSM_STATE_DOWNLOAD: + if (param->fsm_flag & FSM_F_DL_PORT_CREATE) + cd->hw_ops.init(cd, CLDMA0); + break; + case FSM_STATE_BOOTUP: + for (i = 0; i < NR_CLDMA; i++) + cd->hw_ops.init(cd, i); + break; + case FSM_STATE_OFF: + for (i = 0; i < NR_CLDMA; i++) + cd->hw_ops.exit(cd, i); + break; + case FSM_STATE_MDEE: + if (param->fsm_flag & FSM_F_MDEE_INIT) + cd->hw_ops.init(cd, CLDMA1); + hw = cd->cldma_hw[CLDMA1 & HIF_ID_BITMASK]; + cd->hw_ops.fsm_state_listener(param, hw); + break; + default: + break; + } +} + struct hif_ops cldma_ops = { .init = mtk_cldma_init, .exit = mtk_cldma_exit, .trb_process = mtk_cldma_trb_process, .submit_tx = mtk_cldma_submit_tx, + .fsm_state_listener = mtk_cldma_fsm_state_listener, }; diff --git a/drivers/net/wwan/mediatek/mtk_cldma.h b/drivers/net/wwan/mediatek/mtk_cldma.h index 4fd5f826bcf6..c9656aa31455 100644 --- a/drivers/net/wwan/mediatek/mtk_cldma.h +++ b/drivers/net/wwan/mediatek/mtk_cldma.h @@ -10,6 +10,7 @@ #include "mtk_ctrl_plane.h" #include "mtk_dev.h" +#include "mtk_fsm.h" #define HW_QUEUE_NUM 8 #define ALLQ (0XFF) @@ -134,6 +135,7 @@ struct cldma_hw_ops { int (*txq_free)(struct cldma_hw *hw, int vqno); int (*rxq_free)(struct cldma_hw *hw, int vqno); int (*start_xfer)(struct cldma_hw *hw, int qno); + void (*fsm_state_listener)(struct mtk_fsm_param *param, struct cldma_hw *hw); }; struct cldma_hw { diff --git a/drivers/net/wwan/mediatek/mtk_ctrl_plane.c b/drivers/net/wwan/mediatek/mtk_ctrl_plane.c index 74845f8afa3e..12d9c30f2380 100644 --- a/drivers/net/wwan/mediatek/mtk_ctrl_plane.c +++ b/drivers/net/wwan/mediatek/mtk_ctrl_plane.c @@ -15,6 +15,11 @@ #include "mtk_ctrl_plane.h" #include "mtk_port.h" +static const struct virtq vq_tbl[] = { + {VQ(0), CLDMA0, TXQ(0), RXQ(0), VQ_MTU_3_5K, VQ_MTU_3_5K, TX_REQ_NUM, RX_REQ_NUM}, + {VQ(1), CLDMA1, TXQ(0), RXQ(0), VQ_MTU_3_5K, VQ_MTU_3_5K, TX_REQ_NUM, RX_REQ_NUM}, +}; + static int mtk_ctrl_get_hif_id(unsigned char peer_id) { if (peer_id == MTK_PEER_ID_SAP) @@ -96,6 +101,160 @@ int mtk_ctrl_vq_color_cleanup(struct mtk_ctrl_blk *ctrl_blk, unsigned char peer_ return 0; } +static bool mtk_ctrl_vqs_is_empty(struct trb_srv *srv) +{ + int i; + + for (i = srv->vq_start; i < srv->vq_cnt; i++) { + if (!skb_queue_empty(&srv->trans->skb_list[i])) + return false; + } + + return true; +} + +static void mtk_ctrl_vq_flush(struct trb_srv *srv, int vqno) +{ + struct mtk_ctrl_trans *trans = srv->trans; + struct sk_buff *skb; + struct trb *trb; + + while (!skb_queue_empty(&trans->skb_list[vqno])) { + skb = skb_dequeue(&trans->skb_list[vqno]); + trb = (struct trb *)skb->cb; + trb->status = -EIO; + trb->trb_complete(skb); + } +} + +static void mtk_ctrl_vqs_flush(struct trb_srv *srv) +{ + int i; + + for (i = srv->vq_start; i < srv->vq_cnt; i++) + mtk_ctrl_vq_flush(srv, i); +} + +static void mtk_ctrl_trb_process(struct trb_srv *srv) +{ + struct mtk_ctrl_trans *trans = srv->trans; + struct sk_buff *skb, *skb_next; + struct trb *trb, *trb_next; + int tx_burst_cnt = 0; + struct virtq *vq; + int loop; + int 
idx; + int err; + int i; + + for (i = srv->vq_start; i < srv->vq_cnt; i++) { + loop = 0; + do { + if (skb_queue_empty(&trans->skb_list[i])) + break; + + skb = skb_peek(&trans->skb_list[i]); + trb = (struct trb *)skb->cb; + vq = trans->vq_tbl + trb->vqno; + idx = (vq->hif_id >> HIF_CLASS_SHIFT) & (HIF_CLASS_WIDTH - 1); + if (idx < 0 || idx >= HIF_CLASS_NUM) + break; + + switch (trb->cmd) { + case TRB_CMD_ENABLE: + case TRB_CMD_DISABLE: + skb_unlink(skb, &trans->skb_list[i]); + err = mtk_port_mngr_vq_status_check(skb); + if (!err && trb->cmd == TRB_CMD_DISABLE) + mtk_ctrl_vq_flush(srv, i); + break; + case TRB_CMD_TX: + mtk_port_add_header(skb); + err = trans->ops[idx]->submit_tx(trans->dev[idx], skb); + if (err) + break; + + tx_burst_cnt++; + if (tx_burst_cnt >= TX_BURST_MAX_CNT || + skb_queue_is_last(&trans->skb_list[i], skb)) { + tx_burst_cnt = 0; + } else { + skb_next = skb_peek_next(skb, &trans->skb_list[i]); + trb_next = (struct trb *)skb_next->cb; + if (trb_next->cmd != TRB_CMD_TX) + tx_burst_cnt = 0; + } + + skb_unlink(skb, &trans->skb_list[i]); + err = tx_burst_cnt; + break; + default: + err = -EFAULT; + } + + if (!err) + trans->ops[idx]->trb_process(trans->dev[idx], skb); + + loop++; + } while (loop < TRB_NUM_PER_ROUND); + } +} + +static int mtk_ctrl_trb_thread(void *args) +{ + struct trb_srv *srv = args; + + while (!kthread_should_stop()) { + if (mtk_ctrl_vqs_is_empty(srv)) + wait_event_freezable(srv->trb_waitq, + !mtk_ctrl_vqs_is_empty(srv) || + kthread_should_stop() || kthread_should_park()); + + if (kthread_should_stop()) + break; + + if (kthread_should_park()) + kthread_parkme(); + + do { + mtk_ctrl_trb_process(srv); + if (need_resched()) + cond_resched(); + } while (!mtk_ctrl_vqs_is_empty(srv) && !kthread_should_stop() && + !kthread_should_park()); + } + mtk_ctrl_vqs_flush(srv); + return 0; +} + +static int mtk_ctrl_trb_srv_init(struct mtk_ctrl_trans *trans) +{ + struct trb_srv *srv; + + srv = devm_kzalloc(trans->mdev->dev, sizeof(*srv), GFP_KERNEL); + if (!srv) + return -ENOMEM; + + srv->trans = trans; + srv->vq_start = 0; + srv->vq_cnt = VQ_NUM; + + init_waitqueue_head(&srv->trb_waitq); + srv->trb_thread = kthread_run(mtk_ctrl_trb_thread, srv, "mtk_trb_srv_%s", + trans->mdev->dev_str); + trans->trb_srv = srv; + + return 0; +} + +static void mtk_ctrl_trb_srv_exit(struct mtk_ctrl_trans *trans) +{ + struct trb_srv *srv = trans->trb_srv; + + kthread_stop(srv->trb_thread); + devm_kfree(trans->mdev->dev, srv); +} + int mtk_ctrl_trb_submit(struct mtk_ctrl_blk *blk, struct sk_buff *skb) { struct mtk_ctrl_trans *trans = blk->trans; @@ -113,12 +272,108 @@ int mtk_ctrl_trb_submit(struct mtk_ctrl_blk *blk, struct sk_buff *skb) if (VQ_LIST_FULL(trans, vqno) && trb->cmd != TRB_CMD_DISABLE) return -EAGAIN; - /* This function will implement in next patch */ + if (trb->cmd == TRB_CMD_DISABLE) + skb_queue_head(&trans->skb_list[vqno], skb); + else + skb_queue_tail(&trans->skb_list[vqno], skb); + wake_up(&trans->trb_srv->trb_waitq); return 0; } +static int mtk_ctrl_trans_init(struct mtk_ctrl_blk *ctrl_blk) +{ + struct mtk_ctrl_trans *trans; + int err; + int i; + + trans = devm_kzalloc(ctrl_blk->mdev->dev, sizeof(*trans), GFP_KERNEL); + if (!trans) + return -ENOMEM; + + trans->ctrl_blk = ctrl_blk; + trans->vq_tbl = (struct virtq *)vq_tbl; + trans->ops[CLDMA_CLASS_ID] = &cldma_ops; + trans->mdev = ctrl_blk->mdev; + + for (i = 0; i < VQ_NUM; i++) + skb_queue_head_init(&trans->skb_list[i]); + + for (i = 0; i < HIF_CLASS_NUM; i++) { + err = trans->ops[i]->init(trans); + if (err) + goto err_exit; + } 
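+ /* All HIF transports are initialized; start the TRB service thread that drains the per-VQ skb lists. */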
+ + err = mtk_ctrl_trb_srv_init(trans); + if (err) + goto err_exit; + + ctrl_blk->trans = trans; + atomic_set(&trans->available, 1); + + return 0; + +err_exit: + for (i--; i >= 0; i--) + trans->ops[i]->exit(trans); + + devm_kfree(ctrl_blk->mdev->dev, trans); + return err; +} + +static int mtk_ctrl_trans_exit(struct mtk_ctrl_blk *ctrl_blk) +{ + struct mtk_ctrl_trans *trans = ctrl_blk->trans; + int i; + + atomic_set(&trans->available, 0); + mtk_ctrl_trb_srv_exit(trans); + + for (i = 0; i < HIF_CLASS_NUM; i++) + trans->ops[i]->exit(trans); + + devm_kfree(ctrl_blk->mdev->dev, trans); + return 0; +} + +static void mtk_ctrl_trans_fsm_state_handler(struct mtk_fsm_param *param, + struct mtk_ctrl_blk *ctrl_blk) +{ + int i; + + switch (param->to) { + case FSM_STATE_OFF: + for (i = 0; i < HIF_CLASS_NUM; i++) + ctrl_blk->trans->ops[i]->fsm_state_listener(param, ctrl_blk->trans); + mtk_ctrl_trans_exit(ctrl_blk); + break; + case FSM_STATE_ON: + mtk_ctrl_trans_init(ctrl_blk); + fallthrough; + default: + for (i = 0; i < HIF_CLASS_NUM; i++) + ctrl_blk->trans->ops[i]->fsm_state_listener(param, ctrl_blk->trans); + } +} + +static void mtk_ctrl_fsm_state_listener(struct mtk_fsm_param *param, void *data) +{ + struct mtk_ctrl_blk *ctrl_blk = data; + + if ((param->to == FSM_STATE_MDEE && param->fsm_flag & FSM_F_MDEE_ALLQ_RESET) || + (param->to == FSM_STATE_MDEE && param->fsm_flag & FSM_F_MDEE_INIT) || + param->to == FSM_STATE_POSTDUMP || param->to == FSM_STATE_DOWNLOAD || + param->to == FSM_STATE_BOOTUP) { + mtk_ctrl_trans_fsm_state_handler(param, ctrl_blk); + mtk_port_mngr_fsm_state_handler(param, ctrl_blk->port_mngr); + } else { + mtk_port_mngr_fsm_state_handler(param, ctrl_blk->port_mngr); + mtk_ctrl_trans_fsm_state_handler(param, ctrl_blk); + } +} + int mtk_ctrl_init(struct mtk_md_dev *mdev) { struct mtk_ctrl_blk *ctrl_blk; @@ -150,8 +405,17 @@ int mtk_ctrl_init(struct mtk_md_dev *mdev) if (err) goto err_destroy_pool_63K; + err = mtk_fsm_notifier_register(mdev, MTK_USER_CTRL, mtk_ctrl_fsm_state_listener, + ctrl_blk, FSM_PRIO_1, false); + if (err) { + dev_err(mdev->dev, "Fail to register fsm notification(ret = %d)\n", err); + goto err_port_exit; + } + return 0; +err_port_exit: + mtk_port_mngr_exit(ctrl_blk); err_destroy_pool_63K: mtk_bm_pool_destroy(mdev, ctrl_blk->bm_pool_63K); err_destroy_pool: @@ -166,6 +430,7 @@ int mtk_ctrl_exit(struct mtk_md_dev *mdev) { struct mtk_ctrl_blk *ctrl_blk = mdev->ctrl_blk; + mtk_fsm_notifier_unregister(mdev, MTK_USER_CTRL); mtk_port_mngr_exit(ctrl_blk); mtk_bm_pool_destroy(mdev, ctrl_blk->bm_pool); mtk_bm_pool_destroy(mdev, ctrl_blk->bm_pool_63K); diff --git a/drivers/net/wwan/mediatek/mtk_ctrl_plane.h b/drivers/net/wwan/mediatek/mtk_ctrl_plane.h index 38574dc21455..40c72b032413 100644 --- a/drivers/net/wwan/mediatek/mtk_ctrl_plane.h +++ b/drivers/net/wwan/mediatek/mtk_ctrl_plane.h @@ -10,19 +10,27 @@ #include #include "mtk_dev.h" +#include "mtk_fsm.h" #define VQ(N) (N) #define VQ_NUM (2) +#define TX_REQ_NUM (16) +#define RX_REQ_NUM (TX_REQ_NUM) +#define VQ_MTU_2K (0x800) #define VQ_MTU_3_5K (0xE00) +#define VQ_MTU_7K (0x1C00) #define VQ_MTU_63K (0xFC00) +#define TRB_NUM_PER_ROUND (16) #define SKB_LIST_MAX_LEN (16) +#define TX_BURST_MAX_CNT (5) #define BUFF_3_5K_MAX_CNT (100) #define BUFF_63K_MAX_CNT (64) #define HIF_CLASS_NUM (1) #define HIF_CLASS_SHIFT (8) +#define HIF_CLASS_WIDTH (8) #define HIF_ID_BITMASK (0x01) #define VQ_LIST_FULL(trans, vqno) ((trans)->skb_list[vqno].qlen >= SKB_LIST_MAX_LEN) @@ -72,6 +80,7 @@ struct hif_ops { int (*exit)(struct mtk_ctrl_trans *trans); 
int (*submit_tx)(void *dev, struct sk_buff *skb); int (*trb_process)(void *dev, struct sk_buff *skb); + void (*fsm_state_listener)(struct mtk_fsm_param *param, struct mtk_ctrl_trans *trans); }; struct mtk_ctrl_trans { diff --git a/drivers/net/wwan/mediatek/mtk_dev.c b/drivers/net/wwan/mediatek/mtk_dev.c index 96b111be206a..3bdd2888e072 100644 --- a/drivers/net/wwan/mediatek/mtk_dev.c +++ b/drivers/net/wwan/mediatek/mtk_dev.c @@ -6,11 +6,16 @@ #include "mtk_bm.h" #include "mtk_ctrl_plane.h" #include "mtk_dev.h" +#include "mtk_fsm.h" int mtk_dev_init(struct mtk_md_dev *mdev) { int ret; + ret = mtk_fsm_init(mdev); + if (ret) + goto err_fsm_init; + ret = mtk_bm_init(mdev); if (ret) goto err_bm_init; @@ -22,17 +27,24 @@ int mtk_dev_init(struct mtk_md_dev *mdev) err_ctrl_init: mtk_bm_exit(mdev); err_bm_init: + mtk_fsm_exit(mdev); +err_fsm_init: return ret; } void mtk_dev_exit(struct mtk_md_dev *mdev) { + mtk_fsm_evt_submit(mdev, FSM_EVT_DEV_RM, 0, NULL, 0, + EVT_MODE_BLOCKING | EVT_MODE_TOHEAD); mtk_ctrl_exit(mdev); mtk_bm_exit(mdev); + mtk_fsm_exit(mdev); } int mtk_dev_start(struct mtk_md_dev *mdev) { + mtk_fsm_evt_submit(mdev, FSM_EVT_DEV_ADD, 0, NULL, 0, 0); + mtk_fsm_start(mdev); return 0; } diff --git a/drivers/net/wwan/mediatek/mtk_dev.h b/drivers/net/wwan/mediatek/mtk_dev.h index d6e8e9b2e52a..26f0c87079cb 100644 --- a/drivers/net/wwan/mediatek/mtk_dev.h +++ b/drivers/net/wwan/mediatek/mtk_dev.h @@ -130,6 +130,8 @@ struct mtk_md_dev { u32 hw_ver; int msi_nvecs; char dev_str[MTK_DEV_STR_LEN]; + + struct mtk_md_fsm *fsm; void *ctrl_blk; struct mtk_bm_ctrl *bm_ctrl; }; diff --git a/drivers/net/wwan/mediatek/mtk_fsm.c b/drivers/net/wwan/mediatek/mtk_fsm.c new file mode 100644 index 000000000000..790d070fc2ec --- /dev/null +++ b/drivers/net/wwan/mediatek/mtk_fsm.c @@ -0,0 +1,1310 @@ +// SPDX-License-Identifier: BSD-3-Clause-Clear +/* + * Copyright (c) 2022, MediaTek Inc. 
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "mtk_common.h" +#include "mtk_fsm.h" +#include "mtk_port.h" +#include "mtk_port_io.h" +#include "mtk_reg.h" + +#define EVT_TF_PAUSE (0) +#define EVT_TF_GATECLOSED (1) + +#define FSM_HS_START_MASK (FSM_F_SAP_HS_START | FSM_F_MD_HS_START) +#define FSM_HS2_DONE_MASK (FSM_F_SAP_HS2_DONE | FSM_F_MD_HS2_DONE) + +#define EXT_EVT_D2H_MDEE_MASK (EXT_EVT_D2H_EXCEPT_INIT | EXT_EVT_D2H_EXCEPT_INIT_DONE |\ + EXT_EVT_D2H_EXCEPT_CLEARQ_DONE | EXT_EVT_D2H_EXCEPT_ALLQ_RESET) +#define RTFT_DATA_SIZE (3 * 1024) + +#define MDEE_CHK_ID 0x45584350 +#define MDEE_REC_OK_CHK_ID 0x45524543 + +#define REGION_BITMASK 0xF +#define BROM_EVT_SHIFT 4 +#define LK_EVT_SHIFT 8 +#define DEVICE_CFG_SHIFT 24 +#define DEVICE_CFG_REGION_MASK 0x3 + +#define HOST_EVT_SHIFT 28 +#define HOST_REGION_BITMASK 0xF0000000 + +enum host_event { + HOST_EVT_INIT = 0, + HOST_ENTER_DA = 2, +}; + +enum brom_event { + BROM_EVT_NORMAL = 0, + BROM_EVT_JUMP_BL, + BROM_EVT_TIME_OUT, + BROM_EVT_JUMP_DA, + BROM_EVT_START_DL, +}; + +enum lk_event { + LK_EVT_NORMAL = 0, + LK_EVT_CREATE_PD_PORT, +}; + +enum device_stage { + DEV_STAGE_INIT = 0, + DEV_STAGE_BROM1, + DEV_STAGE_BROM2, + DEV_STAGE_LK, + DEV_STAGE_LINUX, + DEV_STAGE_MAX +}; + +enum device_cfg { + DEV_CFG_NORMAL = 0, + DEV_CFG_MD_ONLY, +}; + +enum runtime_feature_support_type { + RTFT_TYPE_NOT_EXIST = 0, + RTFT_TYPE_NOT_SUPPORT = 1, + RTFT_TYPE_MUST_SUPPORT = 2, + RTFT_TYPE_OPTIONAL_SUPPORT = 3, + RTFT_TYPE_SUPPORT_BACKWARD_COMPAT = 4, +}; + +enum runtime_feature_id { + RTFT_ID_MD_PORT_ENUM = 0, + RTFT_ID_SAP_PORT_ENUM = 1, + RTFT_ID_MD_PORT_CFG = 2, + RTFT_ID_MAX +}; + +enum ctrl_msg_id { + CTRL_MSG_HS1 = 0, + CTRL_MSG_HS2 = 1, + CTRL_MSG_HS3 = 2, + CTRL_MSG_MDEE = 4, + CTRL_MSG_MDEE_REC_OK = 6, + CTRL_MSG_MDEE_PASS = 8, +}; + +struct ctrl_msg_header { + __le32 id; + __le32 ex_msg; + __le32 data_len; + u8 reserved[]; +} __packed; + +struct runtime_feature_entry { + u8 feature_id; + struct runtime_feature_info support_info; + u8 reserved[2]; + __le32 data_len; + u8 data[]; +}; + +struct feature_query { + __le32 head_pattern; + struct runtime_feature_info ft_set[FEATURE_CNT]; + __le32 tail_pattern; +}; + +static int mtk_fsm_send_hs1_msg(struct fsm_hs_info *hs_info) +{ + struct mtk_md_fsm *fsm = container_of(hs_info, struct mtk_md_fsm, hs_info[hs_info->id]); + struct ctrl_msg_header *ctrl_msg_h; + struct feature_query *ft_query; + struct sk_buff *skb; + int ret, msg_size; + + msg_size = sizeof(*ctrl_msg_h) + sizeof(*ft_query); + skb = __dev_alloc_skb(msg_size, GFP_KERNEL); + if (!skb) { + ret = -ENOMEM; + goto hs_err; + } + + skb_put(skb, msg_size); + /* fill control message header */ + ctrl_msg_h = (struct ctrl_msg_header *)skb->data; + ctrl_msg_h->id = cpu_to_le32(CTRL_MSG_HS1); + ctrl_msg_h->ex_msg = 0; + ctrl_msg_h->data_len = cpu_to_le32(sizeof(*ft_query)); + + /* fill feature query structure */ + ft_query = (struct feature_query *)(skb->data + sizeof(*ctrl_msg_h)); + ft_query->head_pattern = cpu_to_le32(FEATURE_QUERY_PATTERN); + memcpy(ft_query->ft_set, hs_info->query_ft_set, sizeof(hs_info->query_ft_set)); + ft_query->tail_pattern = cpu_to_le32(FEATURE_QUERY_PATTERN); + + /* send handshake1 message to device */ + ret = mtk_port_internal_write(hs_info->ctrl_port, skb); + if (ret > 0) + return 0; +hs_err: + dev_err(fsm->mdev->dev, "Failed to send handshake1 message,ret=%d\n", ret); + return ret; +} + +static int mtk_fsm_feature_set_match(enum 
runtime_feature_support_type *cur_ft_spt, + struct runtime_feature_info rtft_info_st, + struct runtime_feature_info rtft_info_cfg) +{ + int ret = 0; + + switch (FIELD_GET(FEATURE_TYPE, rtft_info_st.feature)) { + case RTFT_TYPE_NOT_EXIST: + fallthrough; + case RTFT_TYPE_NOT_SUPPORT: + *cur_ft_spt = RTFT_TYPE_NOT_EXIST; + break; + case RTFT_TYPE_MUST_SUPPORT: + if (FIELD_GET(FEATURE_TYPE, rtft_info_cfg.feature) == RTFT_TYPE_NOT_EXIST || + FIELD_GET(FEATURE_TYPE, rtft_info_cfg.feature) == RTFT_TYPE_NOT_SUPPORT) + ret = -EPROTO; + else + *cur_ft_spt = RTFT_TYPE_MUST_SUPPORT; + break; + case RTFT_TYPE_OPTIONAL_SUPPORT: + if (FIELD_GET(FEATURE_TYPE, rtft_info_cfg.feature) == RTFT_TYPE_NOT_EXIST || + FIELD_GET(FEATURE_TYPE, rtft_info_cfg.feature) == RTFT_TYPE_NOT_SUPPORT) { + *cur_ft_spt = RTFT_TYPE_NOT_SUPPORT; + } else { + if (FIELD_GET(FEATURE_VER, rtft_info_st.feature) == + FIELD_GET(FEATURE_VER, rtft_info_cfg.feature)) + *cur_ft_spt = RTFT_TYPE_MUST_SUPPORT; + else + *cur_ft_spt = RTFT_TYPE_NOT_SUPPORT; + } + break; + case RTFT_TYPE_SUPPORT_BACKWARD_COMPAT: + if (FIELD_GET(FEATURE_VER, rtft_info_st.feature) >= + FIELD_GET(FEATURE_VER, rtft_info_cfg.feature)) + *cur_ft_spt = RTFT_TYPE_MUST_SUPPORT; + else + *cur_ft_spt = RTFT_TYPE_NOT_EXIST; + break; + default: + ret = -EPROTO; + } + + return ret; +} + +static int (*rtft_action[FEATURE_CNT])(struct mtk_md_dev *mdev, void *rt_data) = { + [RTFT_ID_MD_PORT_ENUM] = mtk_port_status_update, + [RTFT_ID_SAP_PORT_ENUM] = mtk_port_status_update, +}; + +static int mtk_fsm_parse_hs2_msg(struct fsm_hs_info *hs_info) +{ + struct mtk_md_fsm *fsm = container_of(hs_info, struct mtk_md_fsm, hs_info[hs_info->id]); + char *rt_data = ((struct sk_buff *)hs_info->rt_data)->data; + enum runtime_feature_support_type cur_ft_spt; + struct runtime_feature_entry *rtft_entry; + int ft_id, ret = 0, offset; + + offset = sizeof(struct feature_query); + for (ft_id = 0; ft_id < FEATURE_CNT && offset < hs_info->rt_data_len; ft_id++) { + rtft_entry = (struct runtime_feature_entry *)(rt_data + offset); + ret = mtk_fsm_feature_set_match(&cur_ft_spt, + rtft_entry->support_info, + hs_info->query_ft_set[ft_id]); + if (ret < 0) + break; + + if (cur_ft_spt == RTFT_TYPE_MUST_SUPPORT) + if (rtft_action[ft_id]) + ret = rtft_action[ft_id](fsm->mdev, rtft_entry->data); + if (ret < 0) + break; + + offset += sizeof(rtft_entry) + le32_to_cpu(rtft_entry->data_len); + } + + if (ft_id != FEATURE_CNT) { + dev_err(fsm->mdev->dev, "Unable to handle mistake hs2 message,fd_id=%d\n", ft_id); + ret = -EPROTO; + } + + return ret; +} + +static int mtk_fsm_append_rtft_entries(struct mtk_md_dev *mdev, void *feature_data, + unsigned int *len, struct fsm_hs_info *hs_info) +{ + char *rt_data = ((struct sk_buff *)hs_info->rt_data)->data; + struct runtime_feature_entry *rtft_entry; + int ft_id, ret = 0, rtdata_len = 0; + struct feature_query *ft_query; + u8 version; + + ft_query = (struct feature_query *)rt_data; + if (le32_to_cpu(ft_query->head_pattern) != FEATURE_QUERY_PATTERN || + le32_to_cpu(ft_query->tail_pattern) != FEATURE_QUERY_PATTERN) { + dev_err(mdev->dev, + "Failed to match ft_query pattern: head=0x%x,tail=0x%x\n", + le32_to_cpu(ft_query->head_pattern), le32_to_cpu(ft_query->tail_pattern)); + ret = -EPROTO; + goto hs_err; + } + + /* parse runtime feature query and fill runtime feature entry */ + rtft_entry = feature_data; + for (ft_id = 0; ft_id < FEATURE_CNT && rtdata_len < RTFT_DATA_SIZE; ft_id++) { + rtft_entry->feature_id = ft_id; + rtft_entry->data_len = 0; + + switch (FIELD_GET(FEATURE_TYPE, 
ft_query->ft_set[ft_id].feature)) { + case RTFT_TYPE_NOT_EXIST: + fallthrough; + case RTFT_TYPE_NOT_SUPPORT: + fallthrough; + case RTFT_TYPE_MUST_SUPPORT: + rtft_entry->support_info = ft_query->ft_set[ft_id]; + break; + case RTFT_TYPE_OPTIONAL_SUPPORT: + if (FIELD_GET(FEATURE_VER, ft_query->ft_set[ft_id].feature) == + FIELD_GET(FEATURE_VER, hs_info->supported_ft_set[ft_id].feature) && + FIELD_GET(FEATURE_TYPE, hs_info->supported_ft_set[ft_id].feature) >= + RTFT_TYPE_MUST_SUPPORT) + rtft_entry->support_info.feature = + FIELD_PREP(FEATURE_TYPE, RTFT_TYPE_MUST_SUPPORT); + else + rtft_entry->support_info.feature = + FIELD_PREP(FEATURE_TYPE, RTFT_TYPE_NOT_SUPPORT); + version = FIELD_GET(FEATURE_VER, hs_info->supported_ft_set[ft_id].feature); + rtft_entry->support_info.feature |= FIELD_PREP(FEATURE_VER, version); + break; + case RTFT_TYPE_SUPPORT_BACKWARD_COMPAT: + if (FIELD_GET(FEATURE_VER, ft_query->ft_set[ft_id].feature) >= + FIELD_GET(FEATURE_VER, hs_info->supported_ft_set[ft_id].feature)) + rtft_entry->support_info.feature = + FIELD_PREP(FEATURE_TYPE, RTFT_TYPE_MUST_SUPPORT); + else + rtft_entry->support_info.feature = + FIELD_PREP(FEATURE_TYPE, RTFT_TYPE_NOT_SUPPORT); + version = FIELD_GET(FEATURE_VER, hs_info->supported_ft_set[ft_id].feature); + rtft_entry->support_info.feature |= FIELD_PREP(FEATURE_VER, version); + break; + } + + if (FIELD_GET(FEATURE_TYPE, rtft_entry->support_info.feature) == + RTFT_TYPE_MUST_SUPPORT) { + if (rtft_action[ft_id]) { + ret = rtft_action[ft_id](mdev, rtft_entry->data); + if (ret < 0) + goto hs_err; + } + } + + rtdata_len += sizeof(*rtft_entry) + le32_to_cpu(rtft_entry->data_len); + rtft_entry = (struct runtime_feature_entry *)(feature_data + rtdata_len); + } + *len = rtdata_len; + return 0; +hs_err: + *len = 0; + return ret; +} + +static int mtk_fsm_send_hs3_msg(struct fsm_hs_info *hs_info) +{ + struct mtk_md_fsm *fsm = container_of(hs_info, struct mtk_md_fsm, hs_info[hs_info->id]); + unsigned int data_len, msg_size = 0; + struct ctrl_msg_header *ctrl_msg_h; + struct sk_buff *skb; + int ret; + + skb = __dev_alloc_skb(RTFT_DATA_SIZE, GFP_KERNEL); + if (!skb) { + ret = -ENOMEM; + goto hs_err; + } + /* fill control message header */ + msg_size += sizeof(*ctrl_msg_h); + ctrl_msg_h = (struct ctrl_msg_header *)skb->data; + ctrl_msg_h->id = cpu_to_le32(CTRL_MSG_HS3); + ctrl_msg_h->ex_msg = 0; + ret = mtk_fsm_append_rtft_entries(fsm->mdev, + skb->data + sizeof(*ctrl_msg_h), + &data_len, hs_info); + if (ret) + goto hs_err; + + ctrl_msg_h->data_len = cpu_to_le32(data_len); + msg_size += data_len; + skb_put(skb, msg_size); + /* send handshake3 message to device */ + ret = mtk_port_internal_write(hs_info->ctrl_port, skb); + if (ret > 0) + return 0; +hs_err: + dev_err(fsm->mdev->dev, "Failed to send handshake3 message:ret=%d\n", ret); + return ret; +} + +static int mtk_fsm_sap_ctrl_msg_handler(void *__fsm, struct sk_buff *skb) +{ + struct ctrl_msg_header *ctrl_msg_h; + struct mtk_md_fsm *fsm = __fsm; + struct fsm_hs_info *hs_info; + + ctrl_msg_h = (struct ctrl_msg_header *)skb->data; + skb_pull(skb, sizeof(*ctrl_msg_h)); + + hs_info = &fsm->hs_info[HS_ID_SAP]; + if (le32_to_cpu(ctrl_msg_h->id) != CTRL_MSG_HS2) + return -EPROTO; + + hs_info->rt_data = skb; + hs_info->rt_data_len = skb->len; + mtk_fsm_evt_submit(fsm->mdev, FSM_EVT_STARTUP, + hs_info->fsm_flag_hs2, hs_info, sizeof(*hs_info), 0); + + return 0; +} + +static int mtk_fsm_md_ctrl_msg_handler(void *__fsm, struct sk_buff *skb) +{ + struct ctrl_msg_header *ctrl_msg_h; + struct mtk_md_fsm *fsm = __fsm; + struct 
fsm_hs_info *hs_info; + bool need_free_data = true; + int ret = 0; + u32 ex_msg; + + ctrl_msg_h = (struct ctrl_msg_header *)skb->data; + ex_msg = le32_to_cpu(ctrl_msg_h->ex_msg); + hs_info = &fsm->hs_info[HS_ID_MD]; + switch (le32_to_cpu(ctrl_msg_h->id)) { + case CTRL_MSG_HS2: + need_free_data = false; + skb_pull(skb, sizeof(*ctrl_msg_h)); + hs_info->rt_data = skb; + hs_info->rt_data_len = skb->len; + mtk_fsm_evt_submit(fsm->mdev, FSM_EVT_STARTUP, + hs_info->fsm_flag_hs2, hs_info, sizeof(*hs_info), 0); + break; + case CTRL_MSG_MDEE: + if (ex_msg != MDEE_CHK_ID) + dev_err(fsm->mdev->dev, "Unable to match MDEE packet(0x%x)\n", + ex_msg); + else + mtk_fsm_evt_submit(fsm->mdev, FSM_EVT_MDEE, + FSM_F_MDEE_MSG, hs_info, sizeof(*hs_info), 0); + break; + case CTRL_MSG_MDEE_REC_OK: + if (ex_msg != MDEE_REC_OK_CHK_ID) + dev_err(fsm->mdev->dev, "Unable to match MDEE REC OK packet(0x%x)\n", + ex_msg); + else + mtk_fsm_evt_submit(fsm->mdev, FSM_EVT_MDEE, + FSM_F_MDEE_RECV_OK, NULL, 0, 0); + break; + case CTRL_MSG_MDEE_PASS: + mtk_fsm_evt_submit(fsm->mdev, FSM_EVT_MDEE, FSM_F_MDEE_PASS, NULL, 0, 0); + break; + default: + dev_err(fsm->mdev->dev, "Invalid control message id\n"); + } + + if (need_free_data) + dev_kfree_skb(skb); + + return ret; +} + +static int (*ctrl_msg_handler[HS_ID_MAX])(void *__fsm, struct sk_buff *skb) = { + [HS_ID_MD] = mtk_fsm_md_ctrl_msg_handler, + [HS_ID_SAP] = mtk_fsm_sap_ctrl_msg_handler, +}; + +static void mtk_fsm_host_evt_ack(struct mtk_md_dev *mdev, enum host_event id) +{ + u32 dev_state; + + dev_state = mtk_hw_get_dev_state(mdev); + dev_state &= ~HOST_REGION_BITMASK; + dev_state |= id << HOST_EVT_SHIFT; + mtk_hw_ack_dev_state(mdev, dev_state); +} + +static void mtk_fsm_brom_evt_handler(struct mtk_md_dev *mdev, u32 dev_state) +{ + u32 brom_evt = dev_state >> BROM_EVT_SHIFT & REGION_BITMASK; + + switch (brom_evt) { + case BROM_EVT_JUMP_BL: + mtk_fsm_evt_submit(mdev, FSM_EVT_DOWNLOAD, FSM_F_DL_JUMPBL, NULL, 0, 0); + break; + case BROM_EVT_TIME_OUT: + mtk_fsm_evt_submit(mdev, FSM_EVT_DOWNLOAD, FSM_F_DL_TIMEOUT, NULL, 0, 0); + break; + case BROM_EVT_JUMP_DA: + mtk_fsm_host_evt_ack(mdev, HOST_ENTER_DA); + mtk_fsm_evt_submit(mdev, FSM_EVT_DOWNLOAD, FSM_F_DL_DA, NULL, 0, 0); + break; + case BROM_EVT_START_DL: + mtk_fsm_evt_submit(mdev, FSM_EVT_DOWNLOAD, FSM_F_DL_PORT_CREATE, NULL, 0, 0); + break; + default: + dev_err(mdev->dev, "Invalid brom event, value = 0x%x\n", dev_state); + } +} + +static void mtk_fsm_lk_evt_handler(struct mtk_md_dev *mdev, u32 dev_state) +{ + u32 lk_evt = dev_state >> LK_EVT_SHIFT & REGION_BITMASK; + + if (lk_evt != LK_EVT_CREATE_PD_PORT) { + dev_err(mdev->dev, "Invalid LK event, value = 0x%x\n", dev_state); + return; + } + + mtk_fsm_evt_submit(mdev, FSM_EVT_POSTDUMP, FSM_F_DFLT, NULL, 0, 0); +} + +static void mtk_fsm_linux_evt_handler(struct mtk_md_dev *mdev, + u32 dev_state, struct mtk_md_fsm *fsm) +{ + u32 dev_cfg = dev_state >> DEVICE_CFG_SHIFT & DEVICE_CFG_REGION_MASK; + int hs_id; + + if (dev_cfg == DEV_CFG_MD_ONLY) + fsm->hs_done_flag = FSM_F_MD_HS_START | FSM_F_MD_HS2_DONE; + else + fsm->hs_done_flag = FSM_HS_START_MASK | FSM_HS2_DONE_MASK; + + for (hs_id = 0; hs_id < HS_ID_MAX; hs_id++) + mtk_hw_unmask_ext_evt(mdev, fsm->hs_info[hs_id].mhccif_ch); + + mtk_hw_unmask_ext_evt(mdev, EXT_EVT_D2H_MDEE_MASK); +} + +static int mtk_fsm_early_bootup_handler(u32 status, void *__fsm) +{ + struct mtk_md_fsm *fsm = __fsm; + struct mtk_md_dev *mdev; + u32 dev_state, dev_stage; + + mdev = fsm->mdev; + mtk_hw_mask_ext_evt(mdev, status); + mtk_hw_clear_ext_evt(mdev, 
status); + + dev_state = mtk_hw_get_dev_state(mdev); + dev_stage = dev_state & REGION_BITMASK; + if (dev_stage >= DEV_STAGE_MAX) { + dev_err(mdev->dev, "Invalid dev state 0x%x\n", dev_state); + return -ENXIO; + } + + if (dev_state == fsm->last_dev_state) + goto exit; + dev_info(mdev->dev, "Device stage change 0x%x->0x%x\n", fsm->last_dev_state, dev_state); + fsm->last_dev_state = dev_state; + + switch (dev_stage) { + case DEV_STAGE_BROM1: + fallthrough; + case DEV_STAGE_BROM2: + mtk_fsm_brom_evt_handler(mdev, dev_state); + break; + case DEV_STAGE_LK: + mtk_fsm_lk_evt_handler(mdev, dev_state); + break; + case DEV_STAGE_LINUX: + mtk_fsm_linux_evt_handler(mdev, dev_state, fsm); + break; + } + +exit: + if (dev_stage != DEV_STAGE_LINUX) + mtk_hw_unmask_ext_evt(mdev, EXT_EVT_D2H_BOOT_FLOW_SYNC); + + return 0; +} + +static int mtk_fsm_ctrl_ch_start(struct mtk_md_fsm *fsm, struct fsm_hs_info *hs_info) +{ + if (!hs_info->ctrl_port) { + hs_info->ctrl_port = mtk_port_internal_open(fsm->mdev, hs_info->port_name, 0); + if (!hs_info->ctrl_port) { + dev_err(fsm->mdev->dev, "Failed to open ctrl port(%s)\n", + hs_info->port_name); + return -ENODEV; + } + mtk_port_internal_recv_register(hs_info->ctrl_port, + ctrl_msg_handler[hs_info->id], fsm); + } + + return 0; +} + +static void mtk_fsm_ctrl_ch_stop(struct mtk_md_fsm *fsm) +{ + struct fsm_hs_info *hs_info; + int hs_id; + + for (hs_id = 0; hs_id < HS_ID_MAX; hs_id++) { + hs_info = &fsm->hs_info[hs_id]; + mtk_port_internal_close(hs_info->ctrl_port); + } +} + +static void mtk_fsm_switch_state(struct mtk_md_fsm *fsm, + enum mtk_fsm_state to_state, + struct mtk_fsm_evt *event) +{ + char uevent_info[MTK_UEVENT_INFO_LEN]; + struct mtk_fsm_notifier *nt; + struct mtk_fsm_param param; + + param.from = fsm->state; + param.to = to_state; + param.evt_id = event->id; + param.fsm_flag = event->fsm_flag; + + list_for_each_entry(nt, &fsm->pre_notifiers, entry) + nt->cb(¶m, nt->data); + + fsm->state = to_state; + fsm->fsm_flag |= event->fsm_flag; + dev_info(fsm->mdev->dev, "FSM transited to state=%d, fsm_flag=0x%x\n", + to_state, fsm->fsm_flag); + + snprintf(uevent_info, MTK_UEVENT_INFO_LEN, + "state=%d, fsm_flag=0x%x", to_state, fsm->fsm_flag); + mtk_uevent_notify(fsm->mdev->dev, MTK_UEVENT_FSM, uevent_info); + + list_for_each_entry(nt, &fsm->post_notifiers, entry) + nt->cb(¶m, nt->data); +} + +static int mtk_fsm_startup_act(struct mtk_md_fsm *fsm, struct mtk_fsm_evt *event) +{ + enum mtk_fsm_state to_state = FSM_STATE_BOOTUP; + struct fsm_hs_info *hs_info = event->data; + struct mtk_md_dev *mdev = fsm->mdev; + int ret = 0; + + if (fsm->state != FSM_STATE_ON && fsm->state != FSM_STATE_DOWNLOAD && + fsm->state != FSM_STATE_BOOTUP) { + ret = -EPROTO; + goto hs_err; + } + + if (event->fsm_flag & FSM_HS_START_MASK) { + mtk_fsm_switch_state(fsm, to_state, event); + + ret = mtk_fsm_ctrl_ch_start(fsm, hs_info); + if (!ret) + ret = mtk_fsm_send_hs1_msg(hs_info); + if (ret) + goto hs_err; + } else if (event->fsm_flag & FSM_HS2_DONE_MASK) { + ret = mtk_fsm_parse_hs2_msg(hs_info); + if (!ret) { + mtk_fsm_switch_state(fsm, to_state, event); + ret = mtk_fsm_send_hs3_msg(hs_info); + } + dev_kfree_skb(hs_info->rt_data); + if (ret) + goto hs_err; + } + + if (((fsm->fsm_flag | event->fsm_flag) & fsm->hs_done_flag) == fsm->hs_done_flag) { + to_state = FSM_STATE_READY; + mtk_fsm_switch_state(fsm, to_state, event); + } + + return 0; +hs_err: + dev_err(mdev->dev, "Failed to handshake with device %d:0x%x", fsm->state, fsm->fsm_flag); + return ret; +} + +static int mtk_fsm_dev_add_act(struct 
mtk_md_fsm *fsm, struct mtk_fsm_evt *event) +{ + struct mtk_md_dev *mdev = fsm->mdev; + + if (fsm->state != FSM_STATE_OFF && fsm->state != FSM_STATE_INVALID) { + dev_err(mdev->dev, "Unable to handle the event in the state %d\n", fsm->state); + return -EPROTO; + } + + mtk_fsm_switch_state(fsm, FSM_STATE_ON, event); + mtk_hw_unmask_ext_evt(mdev, EXT_EVT_D2H_BOOT_FLOW_SYNC); + + return 0; +} + +static int mtk_fsm_mdee_act(struct mtk_md_fsm *fsm, struct mtk_fsm_evt *event) +{ + struct mtk_md_dev *mdev = fsm->mdev; + struct ctrl_msg_header *ctrl_msg_h; + struct fsm_hs_info *hs_info; + struct sk_buff *ctrl_msg; + int ret; + + if (fsm->state != FSM_STATE_ON && fsm->state != FSM_STATE_BOOTUP && + fsm->state != FSM_STATE_READY && fsm->state != FSM_STATE_MDEE) { + dev_err(mdev->dev, "Unable to handle the event in the state %d\n", fsm->state); + return -EPROTO; + } + + mtk_fsm_switch_state(fsm, FSM_STATE_MDEE, event); + + switch (event->fsm_flag) { + case FSM_F_MDEE_INIT: + mtk_hw_send_ext_evt(mdev, EXT_EVT_H2D_EXCEPT_ACK); + mtk_fsm_ctrl_ch_start(fsm, &fsm->hs_info[HS_ID_MD]); + break; + case FSM_F_MDEE_CLEARQ_DONE: + mtk_hw_send_ext_evt(mdev, EXT_EVT_H2D_EXCEPT_CLEARQ_ACK); + break; + case FSM_F_MDEE_MSG: + hs_info = event->data; + ctrl_msg = __dev_alloc_skb(sizeof(*ctrl_msg), GFP_KERNEL); + if (!ctrl_msg) { + dev_err(mdev->dev, "Unable to alloc ctrl message packet\n"); + return -ENOMEM; + } + skb_put(ctrl_msg, sizeof(*ctrl_msg_h)); + /* fill control message header */ + ctrl_msg_h = (struct ctrl_msg_header *)ctrl_msg->data; + ctrl_msg_h->id = cpu_to_le32(CTRL_MSG_MDEE); + ctrl_msg_h->ex_msg = cpu_to_le32(MDEE_CHK_ID); + ctrl_msg_h->data_len = 0; + + ret = mtk_port_internal_write(hs_info->ctrl_port, ctrl_msg); + if (ret <= 0) { + dev_err(mdev->dev, "Unable to send MDEE message\n"); + return -EPROTO; + } + break; + case FSM_F_MDEE_RECV_OK: + dev_info(mdev->dev, "MDEE handshake1 successfully\n"); + break; + case FSM_F_MDEE_PASS: + dev_info(mdev->dev, "MDEE handshake2 successfully\n"); + break; + } + + return 0; +} + +static int mtk_fsm_download_act(struct mtk_md_fsm *fsm, struct mtk_fsm_evt *event) +{ + struct mtk_md_dev *mdev = fsm->mdev; + + if (fsm->state != FSM_STATE_ON && fsm->state != FSM_STATE_DOWNLOAD) { + dev_err(mdev->dev, "Unable to handle the event in the state %d\n", fsm->state); + return -EPROTO; + } + + mtk_fsm_switch_state(fsm, FSM_STATE_DOWNLOAD, event); + + return 0; +} + +static int mtk_fsm_postdump_act(struct mtk_md_fsm *fsm, struct mtk_fsm_evt *event) +{ + struct mtk_md_dev *mdev = fsm->mdev; + + if (fsm->state != FSM_STATE_ON && fsm->state != FSM_STATE_DOWNLOAD) { + dev_err(mdev->dev, "Unable to handle the event in the state %d\n", fsm->state); + return -EPROTO; + } + + mtk_fsm_switch_state(fsm, FSM_STATE_POSTDUMP, event); + + return 0; +} + +static void mtk_fsm_evt_release(struct kref *kref) +{ + struct mtk_fsm_evt *event = container_of(kref, struct mtk_fsm_evt, kref); + + devm_kfree(event->mdev->dev, event); +} + +static void mtk_fsm_evt_put(struct mtk_fsm_evt *event) +{ + kref_put(&event->kref, mtk_fsm_evt_release); +} + +static void mtk_fsm_evt_finish(struct mtk_md_fsm *fsm, + struct mtk_fsm_evt *event, int retval) +{ + if (event->mode & EVT_MODE_BLOCKING) { + event->status = retval; + wake_up_interruptible(&fsm->evt_waitq); + } + mtk_fsm_evt_put(event); +} + +static void mtk_fsm_evt_cleanup(struct mtk_md_fsm *fsm, struct list_head *evtq) +{ + struct mtk_fsm_evt *event, *tmp; + + list_for_each_entry_safe(event, tmp, evtq, entry) { + mtk_fsm_evt_finish(fsm, event, 
FSM_EVT_RET_FAIL); + list_del(&event->entry); + } +} + +static int mtk_fsm_enter_off_state(struct mtk_md_fsm *fsm, struct mtk_fsm_evt *event) +{ + struct mtk_md_dev *mdev = fsm->mdev; + + if (fsm->state == FSM_STATE_OFF || fsm->state == FSM_STATE_INVALID) { + dev_err(mdev->dev, "Unable to handle the event in the state %d\n", fsm->state); + return -EPROTO; + } + + mtk_fsm_ctrl_ch_stop(fsm); + mtk_fsm_switch_state(fsm, FSM_STATE_OFF, event); + + return 0; +} + +static int mtk_fsm_dev_rm_act(struct mtk_md_fsm *fsm, struct mtk_fsm_evt *event) +{ + unsigned long flags; + + spin_lock_irqsave(&fsm->evtq_lock, flags); + set_bit(EVT_TF_GATECLOSED, &fsm->t_flag); + mtk_fsm_evt_cleanup(fsm, &fsm->evtq); + spin_unlock_irqrestore(&fsm->evtq_lock, flags); + + return mtk_fsm_enter_off_state(fsm, event); +} + +static int mtk_fsm_hs1_handler(u32 status, void *__hs_info) +{ + struct fsm_hs_info *hs_info = __hs_info; + struct mtk_md_dev *mdev; + struct mtk_md_fsm *fsm; + + fsm = container_of(hs_info, struct mtk_md_fsm, hs_info[hs_info->id]); + mdev = fsm->mdev; + mtk_fsm_evt_submit(mdev, FSM_EVT_STARTUP, + hs_info->fsm_flag_hs1, hs_info, sizeof(*hs_info), 0); + mtk_hw_mask_ext_evt(mdev, hs_info->mhccif_ch); + mtk_hw_clear_ext_evt(mdev, hs_info->mhccif_ch); + + return 0; +} + +static int mtk_fsm_mdee_handler(u32 status, void *__fsm) +{ + u32 handled_mdee_mhccif_ch = 0; + struct mtk_md_fsm *fsm = __fsm; + struct mtk_md_dev *mdev; + + mdev = fsm->mdev; + if (status & EXT_EVT_D2H_EXCEPT_INIT) { + mtk_fsm_evt_submit(mdev, FSM_EVT_MDEE, + FSM_F_MDEE_INIT, NULL, 0, 0); + handled_mdee_mhccif_ch |= EXT_EVT_D2H_EXCEPT_INIT; + } + + if (status & EXT_EVT_D2H_EXCEPT_INIT_DONE) { + mtk_fsm_evt_submit(mdev, FSM_EVT_MDEE, + FSM_F_MDEE_INIT_DONE, NULL, 0, 0); + handled_mdee_mhccif_ch |= EXT_EVT_D2H_EXCEPT_INIT_DONE; + } + + if (status & EXT_EVT_D2H_EXCEPT_CLEARQ_DONE) { + mtk_fsm_evt_submit(mdev, FSM_EVT_MDEE, + FSM_F_MDEE_CLEARQ_DONE, NULL, 0, 0); + handled_mdee_mhccif_ch |= EXT_EVT_D2H_EXCEPT_CLEARQ_DONE; + } + + if (status & EXT_EVT_D2H_EXCEPT_ALLQ_RESET) { + mtk_fsm_evt_submit(mdev, FSM_EVT_MDEE, + FSM_F_MDEE_ALLQ_RESET, NULL, 0, 0); + handled_mdee_mhccif_ch |= EXT_EVT_D2H_EXCEPT_ALLQ_RESET; + } + + mtk_hw_mask_ext_evt(mdev, handled_mdee_mhccif_ch); + mtk_hw_clear_ext_evt(mdev, handled_mdee_mhccif_ch); + + return 0; +} + +static void mtk_fsm_hs_info_init(struct mtk_md_fsm *fsm) +{ + struct mtk_md_dev *mdev = fsm->mdev; + struct fsm_hs_info *hs_info; + int hs_id; + + for (hs_id = 0; hs_id < HS_ID_MAX; hs_id++) { + hs_info = &fsm->hs_info[hs_id]; + hs_info->id = hs_id; + hs_info->ctrl_port = NULL; + switch (hs_id) { + case HS_ID_MD: + snprintf(hs_info->port_name, PORT_NAME_LEN, "MDCTRL"); + hs_info->mhccif_ch = EXT_EVT_D2H_ASYNC_HS_NOTIFY_MD; + hs_info->fsm_flag_hs1 = FSM_F_MD_HS_START; + hs_info->fsm_flag_hs2 = FSM_F_MD_HS2_DONE; + hs_info->query_ft_set[RTFT_ID_MD_PORT_ENUM].feature = + FIELD_PREP(FEATURE_TYPE, RTFT_TYPE_MUST_SUPPORT); + hs_info->query_ft_set[RTFT_ID_MD_PORT_ENUM].feature |= + FIELD_PREP(FEATURE_VER, 0); + hs_info->query_ft_set[RTFT_ID_MD_PORT_CFG].feature = + FIELD_PREP(FEATURE_TYPE, RTFT_TYPE_OPTIONAL_SUPPORT); + hs_info->query_ft_set[RTFT_ID_MD_PORT_CFG].feature |= + FIELD_PREP(FEATURE_VER, 0); + break; + case HS_ID_SAP: + snprintf(hs_info->port_name, PORT_NAME_LEN, "SAPCTRL"); + hs_info->mhccif_ch = EXT_EVT_D2H_ASYNC_HS_NOTIFY_SAP; + hs_info->fsm_flag_hs1 = FSM_F_SAP_HS_START; + hs_info->fsm_flag_hs2 = FSM_F_SAP_HS2_DONE; + hs_info->query_ft_set[RTFT_ID_SAP_PORT_ENUM].feature = + 
FIELD_PREP(FEATURE_TYPE, RTFT_TYPE_MUST_SUPPORT); + hs_info->query_ft_set[RTFT_ID_SAP_PORT_ENUM].feature |= + FIELD_PREP(FEATURE_VER, 0); + break; + } + mtk_hw_register_ext_evt(mdev, hs_info->mhccif_ch, + mtk_fsm_hs1_handler, hs_info); + } + + mtk_hw_register_ext_evt(mdev, EXT_EVT_D2H_MDEE_MASK, mtk_fsm_mdee_handler, fsm); +} + +static void mtk_fsm_hs_info_exit(struct mtk_md_fsm *fsm) +{ + struct mtk_md_dev *mdev = fsm->mdev; + struct fsm_hs_info *hs_info; + int hs_id; + + mtk_hw_unregister_ext_evt(mdev, EXT_EVT_D2H_MDEE_MASK); + for (hs_id = 0; hs_id < HS_ID_MAX; hs_id++) { + hs_info = &fsm->hs_info[hs_id]; + mtk_hw_unregister_ext_evt(mdev, hs_info->mhccif_ch); + } +} + +static void mtk_fsm_reset(struct mtk_md_fsm *fsm) +{ + unsigned long flags; + + fsm->t_flag = 0; + reinit_completion(&fsm->paused); + fsm->last_dev_state = 0; + + fsm->state = FSM_STATE_INVALID; + fsm->fsm_flag = FSM_F_DFLT; + + spin_lock_irqsave(&fsm->evtq_lock, flags); + if (!list_empty(&fsm->evtq)) + mtk_fsm_evt_cleanup(fsm, &fsm->evtq); + spin_unlock_irqrestore(&fsm->evtq_lock, flags); + + mtk_fsm_hs_info_init(fsm); + mtk_hw_register_ext_evt(fsm->mdev, EXT_EVT_D2H_BOOT_FLOW_SYNC, + mtk_fsm_early_bootup_handler, fsm); +} + +static int mtk_fsm_dev_reinit_act(struct mtk_md_fsm *fsm, struct mtk_fsm_evt *event) +{ + struct mtk_md_dev *mdev = fsm->mdev; + + if (fsm->state != FSM_STATE_OFF) { + dev_err(mdev->dev, "Unable to handle the event in state %d\n", fsm->state); + return -EPROTO; + } + + if (event->fsm_flag == FSM_F_FULL_REINIT) { + mtk_hw_reinit(mdev, REINIT_TYPE_EXP); + event->fsm_flag = 0; + } else { + mtk_hw_reinit(mdev, REINIT_TYPE_RESUME); + } + + mtk_fsm_reset(fsm); + mtk_fsm_switch_state(fsm, FSM_STATE_ON, event); + mtk_hw_unmask_ext_evt(mdev, EXT_EVT_D2H_BOOT_FLOW_SYNC); + + return 0; +} + +static int (*evts_act_tbl[FSM_EVT_MAX])(struct mtk_md_fsm *__fsm, struct mtk_fsm_evt *event) = { + [FSM_EVT_DOWNLOAD] = mtk_fsm_download_act, + [FSM_EVT_POSTDUMP] = mtk_fsm_postdump_act, + [FSM_EVT_STARTUP] = mtk_fsm_startup_act, + [FSM_EVT_LINKDOWN] = mtk_fsm_enter_off_state, + [FSM_EVT_AER] = mtk_fsm_enter_off_state, + [FSM_EVT_COLD_RESUME] = mtk_fsm_enter_off_state, + [FSM_EVT_REINIT] = mtk_fsm_dev_reinit_act, + [FSM_EVT_MDEE] = mtk_fsm_mdee_act, + [FSM_EVT_DEV_RESET_REQ] = mtk_fsm_enter_off_state, + [FSM_EVT_DEV_RM] = mtk_fsm_dev_rm_act, + [FSM_EVT_DEV_ADD] = mtk_fsm_dev_add_act, +}; + +/* mtk_fsm_start - start FSM service + * + * @mdev: mdev pointer to mtk_md_dev + * + * This function start a fsm service to handle fsm event. + * + * Return: return value is 0 on success, a negative error + * code on failure. + */ +int mtk_fsm_start(struct mtk_md_dev *mdev) +{ + struct mtk_md_fsm *fsm = mdev->fsm; + + if (!fsm) + return -EINVAL; + + dev_info(mdev->dev, "Start fsm by %ps!\n", __builtin_return_address(0)); + clear_bit(EVT_TF_PAUSE, &fsm->t_flag); + if (!fsm->fsm_handler) + return -EFAULT; + + wake_up_process(fsm->fsm_handler); + return 0; +} + +/* mtk_fsm_pause - pause fsm service + * @mdev: pointer to mtk_md_dev. + * + * If the function is called in irq context, it is able to + * be paused, or it will return as soon. It can only work + * in process context. + * + * Return: + * 0: the fsm handler thread is paused. + * <0: fail to pause fsm handler thread. 
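 *
 * Note: the pause is synchronous - the caller sleeps in
 * wait_for_completion() until the event thread acknowledges the request,
 * so this must be called from process context. A typical (illustrative)
 * pairing is:
 *
 *	mtk_fsm_pause(mdev);	stop handling FSM events, e.g. before suspend
 *	...
 *	mtk_fsm_start(mdev);	let the event thread run again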
+ */ +int mtk_fsm_pause(struct mtk_md_dev *mdev) +{ + struct mtk_md_fsm *fsm = mdev->fsm; + + if (!fsm) + return -EINVAL; + + dev_info(mdev->dev, "Pause fsm by %ps!\n", __builtin_return_address(0)); + if (!test_and_set_bit(EVT_TF_PAUSE, &fsm->t_flag)) { + reinit_completion(&fsm->paused); + wake_up_process(fsm->fsm_handler); + } + + wait_for_completion(&fsm->paused); + return 0; +} + +static void mkt_fsm_notifier_cleanup(struct mtk_md_dev *mdev, struct list_head *ntq) +{ + struct mtk_fsm_notifier *nt, *tmp; + + list_for_each_entry_safe(nt, tmp, ntq, entry) { + list_del(&nt->entry); + devm_kfree(mdev->dev, nt); + } +} + +static void mtk_fsm_notifier_insert(struct mtk_fsm_notifier *notifier, struct list_head *head) +{ + struct mtk_fsm_notifier *nt; + + list_for_each_entry(nt, head, entry) { + if (notifier->prio > nt->prio) { + list_add(¬ifier->entry, nt->entry.prev); + return; + } + } + list_add_tail(¬ifier->entry, head); +} + +/* mtk_fsm_notify_register - register notifier callback + * @mdev: pointer to mtk_md_dev + * @id: user id + * @cb: pointer to notification callback provided by user + * @data: pointer to user data if any + * @prio: PRIO_0, PRIO_1 + * @is_pre: 1: pre switch, 0: post switch + * + * Return: return value is 0 on success, a negative error + * code on failure. + */ +int mtk_fsm_notifier_register(struct mtk_md_dev *mdev, + enum mtk_user_id id, + void (*cb)(struct mtk_fsm_param *, void *data), + void *data, + enum mtk_fsm_prio prio, + bool is_pre) +{ + struct mtk_md_fsm *fsm = mdev->fsm; + struct mtk_fsm_notifier *notifier; + + if (!fsm) + return -EINVAL; + + if (id >= MTK_USER_MAX || !cb || prio >= FSM_PRIO_MAX) + return -EINVAL; + + notifier = devm_kzalloc(mdev->dev, sizeof(*notifier), GFP_KERNEL); + if (!notifier) + return -ENOMEM; + + INIT_LIST_HEAD(¬ifier->entry); + notifier->id = id; + notifier->cb = cb; + notifier->data = data; + notifier->prio = prio; + + if (is_pre) + mtk_fsm_notifier_insert(notifier, &fsm->pre_notifiers); + else + mtk_fsm_notifier_insert(notifier, &fsm->post_notifiers); + + return 0; +} + +/* mtk_fsm_notify_unregister - unregister notifier callback + * + * @mdev: pointer to mtk_md_dev + * @id: user id + * + * Return: return value is 0 on success, a negative error + * code on failure. + */ +int mtk_fsm_notifier_unregister(struct mtk_md_dev *mdev, enum mtk_user_id id) +{ + struct mtk_md_fsm *fsm = mdev->fsm; + struct mtk_fsm_notifier *nt, *tmp; + + if (!fsm) + return -EINVAL; + + list_for_each_entry_safe(nt, tmp, &fsm->pre_notifiers, entry) { + if (nt->id == id) { + list_del(&nt->entry); + devm_kfree(mdev->dev, nt); + break; + } + } + list_for_each_entry_safe(nt, tmp, &fsm->post_notifiers, entry) { + if (nt->id == id) { + list_del(&nt->entry); + devm_kfree(mdev->dev, nt); + break; + } + } + return 0; +} + +/* mtk_fsm_evt_submit - submit event + * + * @mdev: pointer to mtk_md_dev + * @id: event id + * @flag: state flag + * @data: user data + * @len: data length + * @mode: EVT_MODE_BLOCKING(1<<0) means that submit + * blocking until event is handled, EVT_MODE_TOHEAD + * (1<<1) means the event will be handled in high + * priority. + * + * Return: 0 will be returned, if the event is appended (non-blocking) + * or event is completed(blocking), -1 will be returned if timeout, 1 + * will be returned if it is finished. 
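 *
 * Illustrative examples (the event ids, flags and modes are defined in
 * mtk_fsm.h):
 *
 *	non-blocking, fire-and-forget:
 *		mtk_fsm_evt_submit(mdev, FSM_EVT_MDEE, FSM_F_MDEE_INIT,
 *				   NULL, 0, 0);
 *
 *	blocking until the event thread has executed the action:
 *		ret = mtk_fsm_evt_submit(mdev, FSM_EVT_DEV_RM, FSM_F_DFLT,
 *					 NULL, 0, EVT_MODE_BLOCKING);
 *
 *	urgent event queued at the head of the event list:
 *		mtk_fsm_evt_submit(mdev, FSM_EVT_DEV_RESET_REQ, FSM_F_DFLT,
 *				   NULL, 0, EVT_MODE_TOHEAD);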
+ */ +int mtk_fsm_evt_submit(struct mtk_md_dev *mdev, + enum mtk_fsm_evt_id id, + enum mtk_fsm_flag flag, + void *data, unsigned int len, + unsigned char mode) +{ + struct mtk_md_fsm *fsm = mdev->fsm; + struct mtk_fsm_evt *event; + unsigned long flags; + int ret = 0; + + if (!fsm || id >= FSM_EVT_MAX) + return FSM_EVT_RET_FAIL; + + event = devm_kzalloc(mdev->dev, sizeof(*event), + (in_irq() || in_softirq() || irqs_disabled()) ? + GFP_ATOMIC : GFP_KERNEL); + if (!event) + return FSM_EVT_RET_FAIL; + + kref_init(&event->kref); + event->mdev = mdev; + event->id = id; + event->fsm_flag = flag; + event->status = FSM_EVT_RET_ONGOING; + event->data = data; + event->len = len; + event->mode = mode; + dev_info(mdev->dev, "Event%d(with mode 0x%x) is appended by %ps\n", + event->id, event->mode, __builtin_return_address(0)); + + spin_lock_irqsave(&fsm->evtq_lock, flags); + if (!test_bit(EVT_TF_GATECLOSED, &fsm->t_flag)) { + if (mode & EVT_MODE_TOHEAD) + list_add(&event->entry, &fsm->evtq); + else + list_add_tail(&event->entry, &fsm->evtq); + spin_unlock_irqrestore(&fsm->evtq_lock, flags); + } else { + spin_unlock_irqrestore(&fsm->evtq_lock, flags); + mtk_fsm_evt_put(event); + dev_err(mdev->dev, "Failed to add event, fsm dev has been removed!\n"); + return FSM_EVT_RET_FAIL; + } + + wake_up_process(fsm->fsm_handler); + if (mode & EVT_MODE_BLOCKING) { + kref_get(&event->kref); + wait_event_interruptible(fsm->evt_waitq, (event->status != 0)); + ret = event->status; + mtk_fsm_evt_put(event); + } + return ret; +} + +static int mtk_fsm_evt_handler(void *__fsm) +{ + struct mtk_md_fsm *fsm = __fsm; + struct mtk_fsm_evt *event; + unsigned long flags; + +wake_up: + while (!kthread_should_stop() && + !test_bit(EVT_TF_PAUSE, &fsm->t_flag) && !list_empty(&fsm->evtq)) { + spin_lock_irqsave(&fsm->evtq_lock, flags); + event = list_first_entry(&fsm->evtq, struct mtk_fsm_evt, entry); + list_del(&event->entry); + spin_unlock_irqrestore(&fsm->evtq_lock, flags); + + dev_info(fsm->mdev->dev, "Event%d(0x%x) is under handling\n", + event->id, event->fsm_flag); + if (event->id < FSM_EVT_MAX && evts_act_tbl[event->id](fsm, event) != 0) + mtk_fsm_evt_finish(fsm, event, FSM_EVT_RET_FAIL); + else + mtk_fsm_evt_finish(fsm, event, FSM_EVT_RET_DONE); + } + + if (kthread_should_stop()) + return 0; + + if (test_bit(EVT_TF_PAUSE, &fsm->t_flag)) + complete_all(&fsm->paused); + + set_current_state(TASK_INTERRUPTIBLE); + schedule(); + + if (fatal_signal_pending(current)) { + /* event handler thread is killed by fatal signal, + * all the waiters will be waken up. + */ + complete_all(&fsm->paused); + mtk_fsm_evt_cleanup(fsm, &fsm->evtq); + return -ERESTARTSYS; + } + goto wake_up; +} + +/* mtk_fsm_init - allocate FSM control block and initialize it + * @mdev: pointer to mtk_md_dev + * + * This function creates a mtk_md_fsm structure dynamically and hook + * it up to mtk_md_dev. When you are finished with this structure, + * call mtk_fsm_exit() and the structure will be dynamically freed. + * + * Return: return value is 0 on success, a negative error + * code on failure. 
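 *
 * Typical ordering, as a sketch of the probe/remove paths that use this
 * API (the exact call sites are outside this file):
 *
 *	ret = mtk_fsm_init(mdev);	allocate the FSM, create the thread
 *	mtk_fsm_start(mdev);		wake the event thread
 *	mtk_fsm_evt_submit(mdev, FSM_EVT_DEV_ADD, FSM_F_DFLT, NULL, 0, 0);
 *	...
 *	mtk_fsm_exit(mdev);		stop the thread and free the FSM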
+ */ +int mtk_fsm_init(struct mtk_md_dev *mdev) +{ + struct mtk_md_fsm *fsm; + int ret; + + fsm = devm_kzalloc(mdev->dev, sizeof(*fsm), GFP_KERNEL); + if (!fsm) + return -ENOMEM; + + fsm->fsm_handler = kthread_create(mtk_fsm_evt_handler, fsm, "fsm_evt_thread%d_%s", + mdev->hw_ver, mdev->dev_str); + if (IS_ERR(fsm->fsm_handler)) { + ret = PTR_ERR(fsm->fsm_handler); + goto err_create; + } + + fsm->mdev = mdev; + init_completion(&fsm->paused); + fsm->state = FSM_STATE_INVALID; + fsm->fsm_flag = FSM_F_DFLT; + + INIT_LIST_HEAD(&fsm->evtq); + spin_lock_init(&fsm->evtq_lock); + init_waitqueue_head(&fsm->evt_waitq); + + INIT_LIST_HEAD(&fsm->pre_notifiers); + INIT_LIST_HEAD(&fsm->post_notifiers); + + mtk_fsm_hs_info_init(fsm); + mtk_hw_register_ext_evt(mdev, EXT_EVT_D2H_BOOT_FLOW_SYNC, + mtk_fsm_early_bootup_handler, fsm); + mdev->fsm = fsm; + return 0; +err_create: + devm_kfree(mdev->dev, fsm); + return ret; +} + +/* mtk_fsm_exit - free FSM control block + * @mdev: pointer to mtk_md_dev + * + * Return: return value is 0 on success, a negative error + * code on failure. + */ +int mtk_fsm_exit(struct mtk_md_dev *mdev) +{ + struct mtk_md_fsm *fsm = mdev->fsm; + unsigned long flags; + + if (!fsm) + return -EINVAL; + + if (fsm->fsm_handler) { + kthread_stop(fsm->fsm_handler); + fsm->fsm_handler = NULL; + } + complete_all(&fsm->paused); + + spin_lock_irqsave(&fsm->evtq_lock, flags); + if (WARN_ON(!list_empty(&fsm->evtq))) + mtk_fsm_evt_cleanup(fsm, &fsm->evtq); + spin_unlock_irqrestore(&fsm->evtq_lock, flags); + + mkt_fsm_notifier_cleanup(mdev, &fsm->pre_notifiers); + mkt_fsm_notifier_cleanup(mdev, &fsm->post_notifiers); + + mtk_hw_unregister_ext_evt(mdev, EXT_EVT_D2H_BOOT_FLOW_SYNC); + mtk_fsm_hs_info_exit(fsm); + + devm_kfree(mdev->dev, fsm); + return 0; +} diff --git a/drivers/net/wwan/mediatek/mtk_fsm.h b/drivers/net/wwan/mediatek/mtk_fsm.h new file mode 100644 index 000000000000..3d2594b26a34 --- /dev/null +++ b/drivers/net/wwan/mediatek/mtk_fsm.h @@ -0,0 +1,178 @@ +/* SPDX-License-Identifier: BSD-3-Clause-Clear + * + * Copyright (c) 2022, MediaTek Inc. 
+ */ + +#ifndef __MTK_FSM_H__ +#define __MTK_FSM_H__ + +#include "mtk_dev.h" + +#define FEATURE_CNT (64) +#define FEATURE_QUERY_PATTERN (0x49434343) + +#define FEATURE_TYPE GENMASK(3, 0) +#define FEATURE_VER GENMASK(7, 4) + +#define EVT_HANDLER_TIMEOUT (5 * HZ) +#define EVT_MODE_BLOCKING (0x01) +#define EVT_MODE_TOHEAD (0x02) + +#define FSM_EVT_RET_FAIL (-1) +#define FSM_EVT_RET_ONGOING (0) +#define FSM_EVT_RET_DONE (1) + +enum mtk_fsm_flag { + FSM_F_DFLT = 0, + FSM_F_DL_PORT_CREATE = BIT(0), + FSM_F_DL_DA = BIT(1), + FSM_F_DL_JUMPBL = BIT(2), + FSM_F_DL_TIMEOUT = BIT(3), + FSM_F_SAP_HS_START = BIT(4), + FSM_F_SAP_HS2_DONE = BIT(5), + FSM_F_MD_HS_START = BIT(6), + FSM_F_MD_HS2_DONE = BIT(7), + FSM_F_MDEE_INIT = BIT(8), + FSM_F_MDEE_INIT_DONE = BIT(9), + FSM_F_MDEE_CLEARQ_DONE = BIT(10), + FSM_F_MDEE_ALLQ_RESET = BIT(11), + FSM_F_MDEE_MSG = BIT(12), + FSM_F_MDEE_RECV_OK = BIT(13), + FSM_F_MDEE_PASS = BIT(14), + FSM_F_FULL_REINIT = BIT(15), +}; + +enum mtk_fsm_state { + FSM_STATE_INVALID = 0, + FSM_STATE_OFF, + FSM_STATE_ON, + FSM_STATE_POSTDUMP, + FSM_STATE_DOWNLOAD, + FSM_STATE_BOOTUP, + FSM_STATE_READY, + FSM_STATE_MDEE, +}; + +enum mtk_fsm_evt_id { + FSM_EVT_DOWNLOAD = 0, + FSM_EVT_POSTDUMP, + FSM_EVT_STARTUP, + FSM_EVT_LINKDOWN, + FSM_EVT_AER, + FSM_EVT_COLD_RESUME, + FSM_EVT_REINIT, + FSM_EVT_MDEE, + FSM_EVT_DEV_RESET_REQ, + FSM_EVT_DEV_RM, + FSM_EVT_DEV_ADD, + FSM_EVT_MAX +}; + +enum mtk_fsm_prio { + FSM_PRIO_0 = 0, + FSM_PRIO_1 = 1, + FSM_PRIO_MAX +}; + +struct mtk_fsm_param { + enum mtk_fsm_state from; + enum mtk_fsm_state to; + enum mtk_fsm_evt_id evt_id; + enum mtk_fsm_flag fsm_flag; +}; + +#define PORT_NAME_LEN 20 + +enum handshake_info_id { + HS_ID_MD = 0, + HS_ID_SAP, + HS_ID_MAX +}; + +struct runtime_feature_info { + u8 feature; +}; + +struct fsm_hs_info { + unsigned char id; + void *ctrl_port; + char port_name[PORT_NAME_LEN]; + unsigned int mhccif_ch; + unsigned int fsm_flag_hs1; + unsigned int fsm_flag_hs2; + /* the feature that the device should support */ + struct runtime_feature_info query_ft_set[FEATURE_CNT]; + /* the feature that the host has supported */ + struct runtime_feature_info supported_ft_set[FEATURE_CNT]; + /* runtime data from device need to be parsed by host */ + void *rt_data; + unsigned int rt_data_len; +}; + +struct mtk_md_fsm { + struct mtk_md_dev *mdev; + struct task_struct *fsm_handler; + struct fsm_hs_info hs_info[HS_ID_MAX]; + unsigned int hs_done_flag; + unsigned long t_flag; + /* completion for event thread paused */ + struct completion paused; + u32 last_dev_state; + /* fsm current state & flag */ + enum mtk_fsm_state state; + unsigned int fsm_flag; + struct list_head evtq; + /* protect evtq */ + spinlock_t evtq_lock; + /* waitq for fsm blocking submit */ + wait_queue_head_t evt_waitq; + /* notifiers before state transition */ + struct list_head pre_notifiers; + /* notifiers after state transition */ + struct list_head post_notifiers; +}; + +struct mtk_fsm_evt { + struct list_head entry; + struct kref kref; + struct mtk_md_dev *mdev; + enum mtk_fsm_evt_id id; + unsigned int fsm_flag; + /* event handling status + * -1: fail, + * 0: on-going, + * 1: successfully + */ + int status; + unsigned char mode; + unsigned int len; + void *data; +}; + +struct mtk_fsm_notifier { + struct list_head entry; + enum mtk_user_id id; + void (*cb)(struct mtk_fsm_param *param, void *data); + void *data; + enum mtk_fsm_prio prio; +}; + +int mtk_fsm_init(struct mtk_md_dev *mdev); +int mtk_fsm_exit(struct mtk_md_dev *mdev); +int mtk_fsm_start(struct mtk_md_dev *mdev); 
+int mtk_fsm_pause(struct mtk_md_dev *mdev); +int mtk_fsm_notifier_register(struct mtk_md_dev *mdev, + enum mtk_user_id id, + void (*cb)(struct mtk_fsm_param *, void *data), + void *data, + enum mtk_fsm_prio prio, + bool is_pre); +int mtk_fsm_notifier_unregister(struct mtk_md_dev *mdev, + enum mtk_user_id id); +int mtk_fsm_evt_submit(struct mtk_md_dev *mdev, + enum mtk_fsm_evt_id id, + enum mtk_fsm_flag flag, + void *data, unsigned int len, + unsigned char mode); + +#endif /* __MTK_FSM_H__ */ diff --git a/drivers/net/wwan/mediatek/mtk_port.c b/drivers/net/wwan/mediatek/mtk_port.c index b9bf2a57f763..6d82fb3c4cdc 100644 --- a/drivers/net/wwan/mediatek/mtk_port.c +++ b/drivers/net/wwan/mediatek/mtk_port.c @@ -14,6 +14,10 @@ #include "mtk_port.h" #include "mtk_port_io.h" +#define MTK_PORT_ENUM_VER (0) +/* this is an empirical value, negotiate with device designer */ +#define MTK_PORT_ENUM_HEAD_PATTERN (0x5a5a5a5a) +#define MTK_PORT_ENUM_TAIL_PATTERN (0xa5a5a5a5) #define MTK_DFLT_TRB_TIMEOUT (5 * HZ) #define MTK_DFLT_TRB_STATUS (0x1) #define MTK_CHECK_RX_SEQ_MASK (0x7fff) @@ -462,8 +466,10 @@ static int mtk_port_open_trb_complete(struct sk_buff *skb) port->rx_mtu = port_mngr->vq_info[trb->vqno].rx_mtu; /* Minus the len of the header */ - port->tx_mtu -= MTK_CCCI_H_ELEN; - port->rx_mtu -= MTK_CCCI_H_ELEN; + if (!(port->info.flags & PORT_F_RAW_DATA)) { + port->tx_mtu -= MTK_CCCI_H_ELEN; + port->rx_mtu -= MTK_CCCI_H_ELEN; + } out: wake_up_interruptible_all(&port->trb_wq); @@ -640,42 +646,102 @@ static int mtk_port_rx_dispatch(struct sk_buff *skb, int len, void *priv) skb_reset_tail_pointer(skb); skb_put(skb, len); - ccci_h = mtk_port_strip_header(skb); - if (unlikely(!ccci_h)) { - dev_warn(port_mngr->ctrl_blk->mdev->dev, - "Unsupported: skb length(%d) is less than ccci header\n", - skb->len); - goto drop_data; - } + /* If ccci header field has been loaded in skb data, + * the data should be dispatched by port manager + */ + if (!(port->info.flags & PORT_F_RAW_DATA)) { + ccci_h = mtk_port_strip_header(skb); + if (unlikely(!ccci_h)) { + dev_warn(port_mngr->ctrl_blk->mdev->dev, + "Unsupported: skb length(%d) is less than ccci header\n", + skb->len); + goto drop_data; + } - dev_dbg(port_mngr->ctrl_blk->mdev->dev, - "RX header:%08x %08x\n", ccci_h->packet_len, ccci_h->status); + dev_dbg(port_mngr->ctrl_blk->mdev->dev, + "RX header:%08x %08x\n", ccci_h->packet_len, ccci_h->status); - channel = FIELD_GET(MTK_HDR_FLD_CHN, le32_to_cpu(ccci_h->status)); - port = mtk_port_search_by_id(port_mngr, channel); - if (unlikely(!port)) { - dev_warn(port_mngr->ctrl_blk->mdev->dev, - "Failed to find port by channel:%d\n", channel); - goto drop_data; - } + channel = FIELD_GET(MTK_HDR_FLD_CHN, le32_to_cpu(ccci_h->status)); + port = mtk_port_search_by_id(port_mngr, channel); + if (unlikely(!port)) { + dev_warn(port_mngr->ctrl_blk->mdev->dev, + "Failed to find port by channel:%d\n", channel); + goto drop_data; + } - /* The sequence number must be continuous */ - ret = mtk_port_check_rx_seq(port, ccci_h); - if (unlikely(ret)) - goto drop_data; + /* The sequence number must be continuous */ + ret = mtk_port_check_rx_seq(port, ccci_h); + if (unlikely(ret)) + goto drop_data; - port->rx_seq = FIELD_GET(MTK_HDR_FLD_SEQ, le32_to_cpu(ccci_h->status)); + port->rx_seq = FIELD_GET(MTK_HDR_FLD_SEQ, le32_to_cpu(ccci_h->status)); + } ret = ports_ops[port->info.type]->recv(port, skb); return ret; drop_data: - dev_kfree_skb_any(skb); + mtk_port_free_rx_skb(port, skb); err_done: return ret; } +static void mtk_port_reset(struct 
mtk_port_mngr *port_mngr) +{ + int tbl_type = PORT_TBL_SAP; + struct radix_tree_iter iter; + struct mtk_port *port; + void __rcu **slot; + + do { + radix_tree_for_each_slot(slot, &port_mngr->port_tbl[tbl_type], &iter, 0) { + MTK_PORT_SEARCH_FROM_RADIX_TREE(port, slot); + MTK_PORT_INTERNAL_NODE_CHECK(port, slot, iter); + port->enable = false; + ports_ops[port->info.type]->reset(port); + } + tbl_type++; + } while (tbl_type < PORT_TBL_MAX); +} + +static int mtk_port_enable(struct mtk_port_mngr *port_mngr) +{ + int tbl_type = PORT_TBL_SAP; + struct radix_tree_iter iter; + struct mtk_port *port; + void __rcu **slot; + + do { + radix_tree_for_each_slot(slot, &port_mngr->port_tbl[tbl_type], &iter, 0) { + MTK_PORT_SEARCH_FROM_RADIX_TREE(port, slot); + MTK_PORT_INTERNAL_NODE_CHECK(port, slot, iter); + if (port->enable) + ports_ops[port->info.type]->enable(port); + } + tbl_type++; + } while (tbl_type < PORT_TBL_MAX); + return 0; +} + +static void mtk_port_disable(struct mtk_port_mngr *port_mngr) +{ + int tbl_type = PORT_TBL_SAP; + struct radix_tree_iter iter; + struct mtk_port *port; + void __rcu **slot; + + do { + radix_tree_for_each_slot(slot, &port_mngr->port_tbl[tbl_type], &iter, 0) { + MTK_PORT_SEARCH_FROM_RADIX_TREE(port, slot); + MTK_PORT_INTERNAL_NODE_CHECK(port, slot, iter); + port->enable = false; + ports_ops[port->info.type]->disable(port); + } + tbl_type++; + } while (tbl_type < PORT_TBL_MAX); +} + /* mtk_port_add_header() - Add mtk_ccci_header to TX packet. * @skb: pointer to socket buffer * @@ -740,6 +806,70 @@ struct mtk_ccci_header *mtk_port_strip_header(struct sk_buff *skb) return ccci_h; } +/* mtk_port_status_update() - Update ports enumeration information. + * @mdev: pointer to mtk_md_dev. + * @data: pointer to mtk_port_enum_msg, which brings enumeration information. + * + * This function called when host driver is doing handshake. + * Structure mtk_port_enum_msg brings ports' enumeration information + * from modem, and this function handles it and set "enable" of mtk_port + * to "true" or "false". + * + * This function can sleep or can be called from interrupt context. 
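 *
 * Expected message layout, as validated below (a sketch, not a full spec):
 *	version		MTK_PORT_ENUM_VER (0)
 *	head_pattern	MTK_PORT_ENUM_HEAD_PATTERN (0x5a5a5a5a)
 *	tail_pattern	MTK_PORT_ENUM_TAIL_PATTERN (0xa5a5a5a5)
 *	port_cnt	number of struct mtk_port_info records in @data, each
 *			carrying a channel id (MTK_INFO_FLD_CHID) and an
 *			enable bit (MTK_INFO_FLD_EN)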
+ * + * Return: + * 0: success to update ports' status + * -EINVAL: input parameter or members in input structure is illegal + */ +int mtk_port_status_update(struct mtk_md_dev *mdev, void *data) +{ + struct mtk_port_enum_msg *msg = data; + struct mtk_port_info *port_info; + struct mtk_port_mngr *port_mngr; + struct mtk_ctrl_blk *ctrl_blk; + struct mtk_port *port; + int port_id; + int ret = 0; + u16 ch_id; + + if (unlikely(!mdev || !msg)) { + ret = -EINVAL; + goto err; + } + + ctrl_blk = mdev->ctrl_blk; + port_mngr = ctrl_blk->port_mngr; + if (le16_to_cpu(msg->version) != MTK_PORT_ENUM_VER) { + ret = -EPROTO; + goto err; + } + + if (le32_to_cpu(msg->head_pattern) != MTK_PORT_ENUM_HEAD_PATTERN || + le32_to_cpu(msg->tail_pattern) != MTK_PORT_ENUM_TAIL_PATTERN) { + ret = -EPROTO; + goto err; + } + + for (port_id = 0; port_id < le16_to_cpu(msg->port_cnt); port_id++) { + port_info = (struct mtk_port_info *)(msg->data + + (sizeof(*port_info) * port_id)); + if (!port_info) { + dev_err(mdev->dev, "Invalid port info, the index %d\n", port_id); + ret = -EINVAL; + goto err; + } + ch_id = FIELD_GET(MTK_INFO_FLD_CHID, le16_to_cpu(port_info->channel)); + port = mtk_port_search_by_id(port_mngr, ch_id); + if (!port) { + dev_err(mdev->dev, "Failed to find the port 0x%x\n", ch_id); + continue; + } + port->enable = FIELD_GET(MTK_INFO_FLD_EN, le16_to_cpu(port_info->channel)); + } +err: + return ret; +} + /* mtk_port_mngr_vq_status_check() - Checking VQ status before enable or disable VQ. * @skb: pointer to socket buffer * @@ -897,6 +1027,145 @@ int mtk_port_vq_disable(struct mtk_port *port) return ret; } +static void mtk_port_mngr_vqs_enable(struct mtk_port_mngr *port_mngr) +{ + int tbl_type = PORT_TBL_SAP; + struct radix_tree_iter iter; + struct mtk_port *port; + void __rcu **slot; + + do { + radix_tree_for_each_slot(slot, &port_mngr->port_tbl[tbl_type], &iter, 0) { + MTK_PORT_SEARCH_FROM_RADIX_TREE(port, slot); + MTK_PORT_INTERNAL_NODE_CHECK(port, slot, iter); + port->tx_seq = 0; + /* After MDEE, cldma reset rx_seq start at 1, not 0 */ + port->rx_seq = 0; + + if (!port->enable) + continue; + + mtk_port_vq_enable(port); + set_bit(PORT_S_RDWR, &port->status); + } + tbl_type++; + } while (tbl_type < PORT_TBL_MAX); +} + +static void mtk_port_mngr_vqs_disable(struct mtk_port_mngr *port_mngr) +{ + int tbl_type = PORT_TBL_SAP; + struct radix_tree_iter iter; + struct mtk_port *port; + void __rcu **slot; + + do { + radix_tree_for_each_slot(slot, &port_mngr->port_tbl[tbl_type], &iter, 0) { + MTK_PORT_SEARCH_FROM_RADIX_TREE(port, slot); + MTK_PORT_INTERNAL_NODE_CHECK(port, slot, iter); + if (!port->enable) + continue; + + /* Disable R/W after VQ close because device is removed suddenly + * or start to sleep. + */ + mutex_lock(&port->write_lock); + clear_bit(PORT_S_RDWR, &port->status); + mutex_unlock(&port->write_lock); + mtk_port_vq_disable(port); + } + tbl_type++; + } while (tbl_type < PORT_TBL_MAX); +} + +/* mtk_port_mngr_fsm_state_handler() - Handle fsm event after state has been changed. + * @fsm_param: pointer to mtk_fsm_param, which including fsm state and event. + * @arg: fsm will pass mtk_port_mngr structure back by using this parameter. + * + * This function will be registered to fsm by control block. If registered successful, + * after fsm state has been changed, the fsm will call this function. + * + * This function can sleep or can be called from interrupt context. + * + * Return: No return value. 
+ */ +void mtk_port_mngr_fsm_state_handler(struct mtk_fsm_param *fsm_param, void *arg) +{ + struct mtk_port_mngr *port_mngr; + struct mtk_port *port; + int evt_id; + int flag; + + if (!fsm_param || !arg) { + pr_err("[TMI] Invalid input fsm_param or arg\n"); + return; + } + + port_mngr = arg; + evt_id = fsm_param->evt_id; + flag = fsm_param->fsm_flag; + + dev_info(port_mngr->ctrl_blk->mdev->dev, "Fsm state %d & fsm flag 0x%x\n", + fsm_param->to, flag); + + switch (fsm_param->to) { + case FSM_STATE_ON: + if (evt_id == FSM_EVT_REINIT) + mtk_port_reset(port_mngr); + break; + case FSM_STATE_BOOTUP: + if (flag & FSM_F_MD_HS_START) { + port = mtk_port_search_by_id(port_mngr, CCCI_CONTROL_RX); + if (!port) { + dev_err(port_mngr->ctrl_blk->mdev->dev, + "Failed to find MD ctrl port\n"); + goto err; + } + ports_ops[port->info.type]->enable(port); + } else if (flag & FSM_F_SAP_HS_START) { + port = mtk_port_search_by_id(port_mngr, CCCI_SAP_CONTROL_RX); + if (!port) { + dev_err(port_mngr->ctrl_blk->mdev->dev, + "Failed to find sAP ctrl port\n"); + goto err; + } + ports_ops[port->info.type]->enable(port); + } + break; + case FSM_STATE_READY: + mtk_port_enable(port_mngr); + break; + case FSM_STATE_OFF: + mtk_port_disable(port_mngr); + break; + case FSM_STATE_MDEE: + if (flag & FSM_F_MDEE_INIT) { + port = mtk_port_search_by_id(port_mngr, CCCI_CONTROL_RX); + if (!port) { + dev_err(port_mngr->ctrl_blk->mdev->dev, "Failed to find MD ctrl port\n"); + goto err; + } + port->enable = true; + ports_ops[port->info.type]->enable(port); + } else if (flag & FSM_F_MDEE_CLEARQ_DONE) { + /* the time 2000ms recommended by device-end + * it's for wait device prepares the data + */ + msleep(2000); + mtk_port_mngr_vqs_disable(port_mngr); + } else if (flag & FSM_F_MDEE_ALLQ_RESET) { + mtk_port_mngr_vqs_enable(port_mngr); + } + break; + default: + dev_warn(port_mngr->ctrl_blk->mdev->dev, + "Unsupported fsm state %d & fsm flag 0x%x\n", fsm_param->to, flag); + break; + } +err: + return; +} + /* mtk_port_mngr_init() - Initialize mtk_port_mngr and mtk_stale_list. * @ctrl_blk: pointer to mtk_ctrl_blk. * @@ -905,7 +1174,7 @@ int mtk_port_vq_disable(struct mtk_port *port) * and this function alloc memory for it. * If port manager can't find stale list in stale list group by * using dev_str, it will also alloc memory for structure mtk_stale_list. - * And then it will initialize port table. + * And then it will initialize port table and register fsm callback. * * Return: * 0: -success to initialize mtk_port_mngr diff --git a/drivers/net/wwan/mediatek/mtk_port.h b/drivers/net/wwan/mediatek/mtk_port.h index 56ed82c41cc2..4f6d2ddd63f0 100644 --- a/drivers/net/wwan/mediatek/mtk_port.h +++ b/drivers/net/wwan/mediatek/mtk_port.h @@ -14,6 +14,7 @@ #include "mtk_ctrl_plane.h" #include "mtk_dev.h" +#include "mtk_fsm.h" #define MTK_PEER_ID_MASK (0xF000) #define MTK_PEER_ID_SHIFT (12) @@ -22,6 +23,7 @@ #define MTK_PEER_ID_MD (0x2) #define MTK_CH_ID_MASK (0x0FFF) #define MTK_CH_ID(ch) ((ch) & MTK_CH_ID_MASK) +#define MTK_PORT_NAME_HDR "wwanD" #define MTK_DFLT_MAX_DEV_CNT (10) #define MTK_DFLT_PORT_NAME_LEN (20) @@ -66,6 +68,7 @@ enum mtk_ccci_ch { enum mtk_port_flag { PORT_F_DFLT = 0, + PORT_F_RAW_DATA = BIT(0), PORT_F_BLOCKING = BIT(1), PORT_F_ALLOW_DROP = BIT(2), }; @@ -87,9 +90,11 @@ struct mtk_internal_port { }; /* union mtk_port_priv - Contains private data for different type of ports. + * @cdev: private data for character device port. * @i_priv: private data for internal other user. 
*/ union mtk_port_priv { + struct cdev *cdev; struct mtk_internal_port i_priv; }; @@ -213,8 +218,10 @@ void mtk_port_stale_list_grp_cleanup(void); int mtk_port_add_header(struct sk_buff *skb); struct mtk_ccci_header *mtk_port_strip_header(struct sk_buff *skb); int mtk_port_send_data(struct mtk_port *port, void *data); +int mtk_port_status_update(struct mtk_md_dev *mdev, void *data); int mtk_port_vq_enable(struct mtk_port *port); int mtk_port_vq_disable(struct mtk_port *port); +void mtk_port_mngr_fsm_state_handler(struct mtk_fsm_param *fsm_param, void *arg); int mtk_port_mngr_vq_status_check(struct sk_buff *skb); int mtk_port_mngr_init(struct mtk_ctrl_blk *ctrl_blk); void mtk_port_mngr_exit(struct mtk_ctrl_blk *ctrl_blk); diff --git a/drivers/net/wwan/mediatek/mtk_port_io.c b/drivers/net/wwan/mediatek/mtk_port_io.c index efbbe97c50dd..baa0fad5d40b 100644 --- a/drivers/net/wwan/mediatek/mtk_port_io.c +++ b/drivers/net/wwan/mediatek/mtk_port_io.c @@ -204,10 +204,7 @@ void *mtk_port_internal_open(struct mtk_md_dev *mdev, char *name, int flag) goto err; } - if (flag & O_NONBLOCK) - port->info.flags &= ~PORT_F_BLOCKING; - else - port->info.flags |= PORT_F_BLOCKING; + port->info.flags |= PORT_F_BLOCKING; err: return port; } diff --git a/drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.c b/drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.c index d2e682453b57..42b4358f2653 100644 --- a/drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.c +++ b/drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.c @@ -946,3 +946,48 @@ int mtk_cldma_start_xfer_t800(struct cldma_hw *hw, int qno) return 0; } + +static void mtk_cldma_hw_reset(struct mtk_md_dev *mdev, int hif_id) +{ + u32 val = mtk_hw_read32(mdev, REG_DEV_INFRA_BASE + REG_INFRA_RST0_SET); + + val |= (REG_CLDMA0_RST_SET_BIT + hif_id); + mtk_hw_write32(mdev, REG_DEV_INFRA_BASE + REG_INFRA_RST0_SET, val); + + val = mtk_hw_read32(mdev, REG_DEV_INFRA_BASE + REG_INFRA_RST0_CLR); + val |= (REG_CLDMA0_RST_CLR_BIT + hif_id); + mtk_hw_write32(mdev, REG_DEV_INFRA_BASE + REG_INFRA_RST0_CLR, val); +} + +void mtk_cldma_fsm_state_listener_t800(struct mtk_fsm_param *param, struct cldma_hw *hw) +{ + struct txq *txq; + int i; + + if (!param || !hw) + return; + + if (param->to == FSM_STATE_MDEE) { + if (param->fsm_flag & FSM_F_MDEE_INIT) { + mtk_cldma_stop_queue(hw->mdev, hw->base_addr, DIR_TX, ALLQ); + for (i = 0; i < HW_QUEUE_NUM; i++) { + txq = hw->txq[i]; + if (txq) + txq->is_stopping = true; + } + } else if (param->fsm_flag & FSM_F_MDEE_CLEARQ_DONE) { + mtk_cldma_hw_reset(hw->mdev, hw->hif_id); + } else if (param->fsm_flag & FSM_F_MDEE_ALLQ_RESET) { + mtk_cldma_hw_init(hw->mdev, hw->base_addr); + for (i = 0; i < HW_QUEUE_NUM; i++) { + txq = hw->txq[i]; + if (txq) + txq->is_stopping = false; + } + /* After leaving lowpower L2 states, PCIe will reset, + * so CLDMA L1 register needs to be set again. 
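 * Unmasking the CLDMA's PCIe interrupt below restores interrupt delivery
 * once the queues have been re-initialized.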
+ */ + mtk_hw_unmask_irq(hw->mdev, hw->pci_ext_irq_id); + } + } +} diff --git a/drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.h b/drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.h index b89d45a81c4f..470a40015f77 100644 --- a/drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.h +++ b/drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.h @@ -9,6 +9,7 @@ #include #include "mtk_cldma.h" +#include "mtk_fsm.h" int mtk_cldma_hw_init_t800(struct cldma_dev *cd, int hif_id); int mtk_cldma_hw_exit_t800(struct cldma_dev *cd, int hif_id); @@ -17,4 +18,5 @@ int mtk_cldma_txq_free_t800(struct cldma_hw *hw, int vqno); struct rxq *mtk_cldma_rxq_alloc_t800(struct cldma_hw *hw, struct sk_buff *skb); int mtk_cldma_rxq_free_t800(struct cldma_hw *hw, int vqno); int mtk_cldma_start_xfer_t800(struct cldma_hw *hw, int qno); +void mtk_cldma_fsm_state_listener_t800(struct mtk_fsm_param *param, struct cldma_hw *hw); #endif diff --git a/drivers/net/wwan/mediatek/pcie/mtk_pci.c b/drivers/net/wwan/mediatek/pcie/mtk_pci.c index 49cd98627410..34426e099d19 100644 --- a/drivers/net/wwan/mediatek/pcie/mtk_pci.c +++ b/drivers/net/wwan/mediatek/pcie/mtk_pci.c @@ -11,6 +11,7 @@ #include #include +#include "mtk_fsm.h" #include "mtk_pci.h" #include "mtk_port_io.h" #include "mtk_reg.h" diff --git a/drivers/net/wwan/mediatek/pcie/mtk_reg.h b/drivers/net/wwan/mediatek/pcie/mtk_reg.h index 23fa7fd9518e..1159c29685c5 100644 --- a/drivers/net/wwan/mediatek/pcie/mtk_reg.h +++ b/drivers/net/wwan/mediatek/pcie/mtk_reg.h @@ -18,6 +18,17 @@ enum mtk_ext_evt_h2d { EXT_EVT_H2D_DEVICE_RESET = 1 << 13, }; +enum mtk_ext_evt_d2h { + EXT_EVT_D2H_PCIE_DS_LOCK_ACK = 1 << 0, + EXT_EVT_D2H_EXCEPT_INIT = 1 << 1, + EXT_EVT_D2H_EXCEPT_INIT_DONE = 1 << 2, + EXT_EVT_D2H_EXCEPT_CLEARQ_DONE = 1 << 3, + EXT_EVT_D2H_EXCEPT_ALLQ_RESET = 1 << 4, + EXT_EVT_D2H_BOOT_FLOW_SYNC = 1 << 5, + EXT_EVT_D2H_ASYNC_HS_NOTIFY_SAP = 1 << 15, + EXT_EVT_D2H_ASYNC_HS_NOTIFY_MD = 1 << 16, +}; + #define REG_PCIE_SW_TRIG_INT 0x00BC #define REG_PCIE_LTSSM_STATUS 0x0150 #define REG_IMASK_LOCAL 0x0180 From patchwork Tue Nov 22 11:11:46 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?b?WWFuY2hhbyBZYW5nICjmnajlvabotoUp?= X-Patchwork-Id: 24302 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp2147362wrr; Tue, 22 Nov 2022 03:38:51 -0800 (PST) X-Google-Smtp-Source: AA0mqf5XF0a8K9ZDmn2HrCDJOsr2SZXg1sDHqdBPKBTGDcg2kx11NT2ITPFdWF+CZuw800jTCmLD X-Received: by 2002:a17:90b:2686:b0:218:bb0a:e295 with SMTP id pl6-20020a17090b268600b00218bb0ae295mr8150879pjb.80.1669117131187; Tue, 22 Nov 2022 03:38:51 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1669117131; cv=none; d=google.com; s=arc-20160816; b=0fyaypGGDZfnD7ReDmTNmqay6tzvlc5h7uXUf5Pzmfpb8yf02E+VZA321wcdS6IxAm +Oj+hYDbzUMpAIqv0PsXhZi6R3ccJkd56cLyJCwuf32YnP/gDJRXnh0jLJh/8VrlPrpJ C3GVODOA68JFWvLh0fiTh3bncewBQ09YDgAaGGyslnrcO+IMKc2iXEq0+7/gAZxdpbBt Zz3I7hGkH7OKvNKmHzpMAyzejisNNbNHsPv/927h8t8A715Ajm8orlKl6PpCmhZt+IVu 0S8IM4/sYnBT4kfPAxt/eiJ6Y88l0Yv59T6l4xREdYkj3k8ZmR8ffIFgUm0eugvR5WrX zMdg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:references:in-reply-to:message-id :date:subject:cc:to:from:dkim-signature; bh=A8k7hjeHoveNCl54TvCDMjQQBy0zuovPKKEAS/niE+M=; b=ajnjT+ndg4DcEMegPv7LsujnmxQCzb0gF21o6oobckntOnI4LC9Di0Ggj81mFDj9F7 W7cq4v+1C3yQcwwyKYPRr8JmXVD4uM7wKAmbNpGnZQLVzG1qXd0zt72Q8/U162l5VFDM 
z9xaP/13ozBRcCmhGuOIxl8aDIb9kIdoZdHTk2ZVd+s9tUCR3FOhuPaTc0kn6j2XOSjH oVXMxqWgfsL/oiebXEt2mBpIZVQTA1BTuRn9zf/2BPQy/bv94hnVtoh4ZjcfSxnc3bg/ PvXCSpEWeaPctCyU2XnwhJWEzhJYeyfTfCx1M0vMTlw2PsrwkqLbFh3neqzmx3TpuZRx sIqg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@mediatek.com header.s=dk header.b=mS+aliSU; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=QUARANTINE sp=QUARANTINE dis=NONE) header.from=mediatek.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id o25-20020a634e59000000b0046ae2a8ea9csi13235196pgl.733.2022.11.22.03.38.36; Tue, 22 Nov 2022 03:38:51 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@mediatek.com header.s=dk header.b=mS+aliSU; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=QUARANTINE sp=QUARANTINE dis=NONE) header.from=mediatek.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233425AbiKVL0Y (ORCPT + 99 others); Tue, 22 Nov 2022 06:26:24 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45652 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233188AbiKVLZ7 (ORCPT ); Tue, 22 Nov 2022 06:25:59 -0500 Received: from mailgw01.mediatek.com (unknown [60.244.123.138]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 625BF60E8A; Tue, 22 Nov 2022 03:19:52 -0800 (PST) X-UUID: d51047fcf105465a83cffe2c1f2b9397-20221122 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=mediatek.com; s=dk; h=Content-Type:MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:CC:To:From; bh=A8k7hjeHoveNCl54TvCDMjQQBy0zuovPKKEAS/niE+M=; b=mS+aliSUe3tLFiUFgdEXpAFPDXcelYBYGP+rzDxU5fzcOIV3OWLjfngb9ymWkkOu3e/QRMWgQ2ZyowKU/VFU7dFTaGnQBJuNFqJ7Dpgm7FkfjNJAoqTG3SuRMgKr62No4LugLouC7H8WNz4HpWkjYUILe/RYxyojg39Nh/4vdN8=; X-CID-P-RULE: Release_Ham X-CID-O-INFO: VERSION:1.1.13,REQID:b71a2be9-aa58-4329-8082-72073131f79a,IP:0,U RL:0,TC:0,Content:-25,EDM:0,RT:0,SF:95,FILE:0,BULK:0,RULE:Release_Ham,ACTI ON:release,TS:70 X-CID-INFO: VERSION:1.1.13,REQID:b71a2be9-aa58-4329-8082-72073131f79a,IP:0,URL :0,TC:0,Content:-25,EDM:0,RT:0,SF:95,FILE:0,BULK:0,RULE:Spam_GS981B3D,ACTI ON:quarantine,TS:70 X-CID-META: VersionHash:d12e911,CLOUDID:3800eadb-6ad4-42ff-91f3-18e0272db660,B ulkID:221122191948CO6K73OI,BulkQuantity:0,Recheck:0,SF:38|28|17|19|48,TC:n il,Content:0,EDM:-3,IP:nil,URL:11|1,File:nil,Bulk:nil,QS:nil,BEC:nil,COL:0 X-UUID: d51047fcf105465a83cffe2c1f2b9397-20221122 Received: from mtkmbs13n1.mediatek.inc [(172.21.101.193)] by mailgw01.mediatek.com (envelope-from ) (Generic MTA with TLSv1.2 ECDHE-RSA-AES256-GCM-SHA384 256/256) with ESMTP id 1160350436; Tue, 22 Nov 2022 19:19:46 +0800 Received: from mtkmbs11n2.mediatek.inc (172.21.101.187) by mtkmbs10n2.mediatek.inc (172.21.101.183) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.792.3; Tue, 22 Nov 2022 19:19:44 +0800 Received: from mcddlt001.gcn.mediatek.inc (10.19.240.15) by mtkmbs11n2.mediatek.inc (172.21.101.73) with Microsoft SMTP Server id 15.2.792.15 via Frontend Transport; Tue, 22 Nov 2022 

19:19:42 +0800 From: Yanchao Yang To: Loic Poulain , Sergey Ryazanov , Johannes Berg , "David S . Miller" , Eric Dumazet , "Jakub Kicinski" , Paolo Abeni , netdev ML , kernel ML CC: MTK ML , Liang Lu , Haijun Liu , Hua Yang , Ting Wang , Felix Chen , Mingliang Xu , Min Dong , Aiden Wang , Guohao Zhang , Chris Feng , "Yanchao Yang" , Lambert Wang , Mingchuang Qiao , Xiayu Zhang , Haozhe Chang , MediaTek Corporation Subject: [PATCH net-next v1 07/13] net: wwan: tmi: Add AT & MBIM WWAN ports Date: Tue, 22 Nov 2022 19:11:46 +0800 Message-ID: <20221122111152.160377-8-yanchao.yang@mediatek.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20221122111152.160377-1-yanchao.yang@mediatek.com> References: <20221122111152.160377-1-yanchao.yang@mediatek.com> MIME-Version: 1.0 X-MTK: N X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_MSPIKE_H2,SPF_HELO_PASS, SPF_PASS,UNPARSEABLE_RELAY autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1750196164808603154?= X-GMAIL-MSGID: =?utf-8?q?1750196164808603154?= From: MediaTek Corporation Adds AT & MBIM ports to the port infrastructure. The WWAN initialization method is responsible for creating the corresponding ports using the WWAN framework infrastructure. The implemented WWAN port operations are start, stop, tx, tx_blocking and tx_poll. Adds Modem Logging (MDLog) port to collect modem logs for debugging purposes. MDLog is supported by the RelayFs interface. MDLog allows user-space APPs to control logging via MBIM command and to collect logs via the RelayFs interface, while port infrastructure facilitates communication between the driver and the modem. 
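As a rough sketch of how such a port plugs into the kernel's WWAN framework (illustrative only: the helper names mtk_wwan_start/stop/tx are placeholders, wwan_port_ops/wwan_port_get_drvdata/wwan_port_rx come from linux/wwan.h, and mtk_port_status_check()/mtk_port_send_data() are port-layer helpers added by this series):

	static int mtk_wwan_start(struct wwan_port *w_port)
	{
		struct mtk_port *port = wwan_port_get_drvdata(w_port);

		/* Only allow opening once the port has been enabled by handshake */
		return mtk_port_status_check(port);
	}

	static void mtk_wwan_stop(struct wwan_port *w_port)
	{
		/* Placeholder: quiesce RX delivery towards the WWAN core here */
	}

	static int mtk_wwan_tx(struct wwan_port *w_port, struct sk_buff *skb)
	{
		struct mtk_port *port = wwan_port_get_drvdata(w_port);

		/* The port layer prepends the CCCI header and submits the TRB */
		return mtk_port_send_data(port, skb);
	}

	static const struct wwan_port_ops mtk_wwan_ops_sketch = {
		.start = mtk_wwan_start,
		.stop = mtk_wwan_stop,
		.tx = mtk_wwan_tx,
	};

On the receive side, the driver would hand complete messages to the core with wwan_port_rx(); the real implementation below additionally provides tx_blocking and tx_poll for blocking writes and poll() support, as listed above.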
Signed-off-by: Felix Chen Signed-off-by: MediaTek Corporation --- drivers/net/wwan/mediatek/mtk_ctrl_plane.c | 3 + drivers/net/wwan/mediatek/mtk_ctrl_plane.h | 2 +- drivers/net/wwan/mediatek/mtk_fsm.c | 9 + drivers/net/wwan/mediatek/mtk_port.c | 107 ++++- drivers/net/wwan/mediatek/mtk_port.h | 80 +++- drivers/net/wwan/mediatek/mtk_port_io.c | 473 ++++++++++++++++++++- drivers/net/wwan/mediatek/mtk_port_io.h | 41 ++ drivers/net/wwan/mediatek/pcie/mtk_pci.c | 18 +- 8 files changed, 723 insertions(+), 10 deletions(-) diff --git a/drivers/net/wwan/mediatek/mtk_ctrl_plane.c b/drivers/net/wwan/mediatek/mtk_ctrl_plane.c index 12d9c30f2380..fb1597a22bc7 100644 --- a/drivers/net/wwan/mediatek/mtk_ctrl_plane.c +++ b/drivers/net/wwan/mediatek/mtk_ctrl_plane.c @@ -18,6 +18,9 @@ static const struct virtq vq_tbl[] = { {VQ(0), CLDMA0, TXQ(0), RXQ(0), VQ_MTU_3_5K, VQ_MTU_3_5K, TX_REQ_NUM, RX_REQ_NUM}, {VQ(1), CLDMA1, TXQ(0), RXQ(0), VQ_MTU_3_5K, VQ_MTU_3_5K, TX_REQ_NUM, RX_REQ_NUM}, + {VQ(2), CLDMA1, TXQ(2), RXQ(2), VQ_MTU_3_5K, VQ_MTU_3_5K, TX_REQ_NUM, RX_REQ_NUM}, + {VQ(3), CLDMA1, TXQ(5), RXQ(5), VQ_MTU_3_5K, VQ_MTU_3_5K, TX_REQ_NUM, RX_REQ_NUM}, + {VQ(4), CLDMA1, TXQ(7), RXQ(7), VQ_MTU_3_5K, VQ_MTU_63K, TX_REQ_NUM, RX_REQ_NUM}, }; static int mtk_ctrl_get_hif_id(unsigned char peer_id) diff --git a/drivers/net/wwan/mediatek/mtk_ctrl_plane.h b/drivers/net/wwan/mediatek/mtk_ctrl_plane.h index 40c72b032413..87f2f9b5f481 100644 --- a/drivers/net/wwan/mediatek/mtk_ctrl_plane.h +++ b/drivers/net/wwan/mediatek/mtk_ctrl_plane.h @@ -13,7 +13,7 @@ #include "mtk_fsm.h" #define VQ(N) (N) -#define VQ_NUM (2) +#define VQ_NUM (5) #define TX_REQ_NUM (16) #define RX_REQ_NUM (TX_REQ_NUM) diff --git a/drivers/net/wwan/mediatek/mtk_fsm.c b/drivers/net/wwan/mediatek/mtk_fsm.c index 790d070fc2ec..d754a34ade6c 100644 --- a/drivers/net/wwan/mediatek/mtk_fsm.c +++ b/drivers/net/wwan/mediatek/mtk_fsm.c @@ -97,6 +97,7 @@ enum ctrl_msg_id { CTRL_MSG_MDEE = 4, CTRL_MSG_MDEE_REC_OK = 6, CTRL_MSG_MDEE_PASS = 8, + CTRL_MSG_UNIFIED_PORT_CFG = 11, }; struct ctrl_msg_header { @@ -416,6 +417,14 @@ static int mtk_fsm_md_ctrl_msg_handler(void *__fsm, struct sk_buff *skb) case CTRL_MSG_MDEE_PASS: mtk_fsm_evt_submit(fsm->mdev, FSM_EVT_MDEE, FSM_F_MDEE_PASS, NULL, 0, 0); break; + case CTRL_MSG_UNIFIED_PORT_CFG: + mtk_port_tbl_update(fsm->mdev, skb->data + sizeof(*ctrl_msg_h)); + ret = mtk_port_internal_write(hs_info->ctrl_port, skb); + if (ret <= 0) + dev_err(fsm->mdev->dev, "Unable to send port config ack message.\n"); + else + need_free_data = false; + break; default: dev_err(fsm->mdev->dev, "Invalid control message id\n"); } diff --git a/drivers/net/wwan/mediatek/mtk_port.c b/drivers/net/wwan/mediatek/mtk_port.c index 6d82fb3c4cdc..2096d2eb4ffd 100644 --- a/drivers/net/wwan/mediatek/mtk_port.c +++ b/drivers/net/wwan/mediatek/mtk_port.c @@ -45,6 +45,9 @@ DEFINE_MUTEX(port_mngr_grp_mtx); static DEFINE_IDA(ccci_dev_ids); static const struct mtk_port_cfg port_cfg[] = { + {CCCI_UART2_TX, CCCI_UART2_RX, VQ(3), PORT_TYPE_WWAN, "AT", PORT_F_ALLOW_DROP}, + {CCCI_MD_LOG_TX, CCCI_MD_LOG_RX, VQ(4), PORT_TYPE_RELAYFS, "MDLog", PORT_F_DFLT}, + {CCCI_MBIM_TX, CCCI_MBIM_RX, VQ(2), PORT_TYPE_WWAN, "MBIM", PORT_F_ALLOW_DROP}, {CCCI_CONTROL_TX, CCCI_CONTROL_RX, VQ(1), PORT_TYPE_INTERNAL, "MDCTRL", PORT_F_ALLOW_DROP}, {CCCI_SAP_CONTROL_TX, CCCI_SAP_CONTROL_RX, VQ(0), PORT_TYPE_INTERNAL, "SAPCTRL", PORT_F_ALLOW_DROP}, @@ -302,11 +305,100 @@ static void mtk_port_tbl_destroy(struct mtk_port_mngr *port_mngr, struct mtk_sta } while (tbl_type < PORT_TBL_MAX); } +/* 
mtk_port_tbl_update() - Update port radix tree table. + * @mdev: pointer to mtk_md_dev. + * @data: pointer to config data from device. + * + * This function called when host driver received a control message from device. + * + * Return: 0 on success and failure value on error. + */ +int mtk_port_tbl_update(struct mtk_md_dev *mdev, void *data) +{ + struct mtk_port_cfg_header *cfg_hdr = data; + struct mtk_port_cfg_hif_info *hif_info; + struct mtk_port_cfg_ch_info *ch_info; + struct mtk_port_mngr *port_mngr; + struct mtk_ctrl_blk *ctrl_blk; + int parsed_data_len = 0; + struct mtk_port *port; + int ret = 0; + + if (unlikely(!mdev || !cfg_hdr)) { + ret = -EINVAL; + goto end; + } + + ctrl_blk = mdev->ctrl_blk; + port_mngr = ctrl_blk->port_mngr; + + if (cfg_hdr->msg_type != PORT_CFG_MSG_REQUEST) { + dev_warn(mdev->dev, "Invalid msg_type: %d\n", cfg_hdr->msg_type); + ret = -EPROTO; + goto end; + } + + if (cfg_hdr->is_enable != 1) { + dev_warn(mdev->dev, "Invalid enable flag: %d\n", cfg_hdr->is_enable); + ret = -EPROTO; + goto end; + } + switch (cfg_hdr->cfg_type) { + case PORT_CFG_CH_INFO: + while (parsed_data_len < le16_to_cpu(cfg_hdr->port_config_len)) { + ch_info = (struct mtk_port_cfg_ch_info *)(cfg_hdr->data + parsed_data_len); + parsed_data_len += sizeof(*ch_info); + + port = mtk_port_search_by_id(port_mngr, le16_to_cpu(ch_info->dl_ch_id)); + if (port) { + continue; + } else { + dev_warn(mdev->dev, + "It's not supported the extended port(%s),ch: 0x%x\n", + ch_info->port_name, le16_to_cpu(ch_info->dl_ch_id)); + } + } + cfg_hdr->msg_type = PORT_CFG_MSG_RESPONSE; + break; + case PORT_CFG_HIF_INFO: + hif_info = (struct mtk_port_cfg_hif_info *)cfg_hdr->data; + /* Clean up all the mark of the vqs before next paint, because if + * clean up at end of case PORT_CFG_CH_INFO, the ch_info may be + * NULL when cfg_hdr->port_config_len is 0, that will lead to can + * not get peer_id. + */ + mtk_ctrl_vq_color_cleanup(port_mngr->ctrl_blk, hif_info->peer_id); + + while (parsed_data_len < le16_to_cpu(cfg_hdr->port_config_len)) { + hif_info = (struct mtk_port_cfg_hif_info *) + (cfg_hdr->data + parsed_data_len); + parsed_data_len += sizeof(*hif_info); + /* Color vq means that mark the vq to configure to the port */ + mtk_ctrl_vq_color_paint(port_mngr->ctrl_blk, + hif_info->peer_id, + hif_info->ul_hw_queue_id, + hif_info->dl_hw_queue_id, + le32_to_cpu(hif_info->ul_hw_queue_mtu), + le32_to_cpu(hif_info->dl_hw_queue_mtu)); + } + cfg_hdr->msg_type = PORT_CFG_MSG_RESPONSE; + break; + default: + dev_warn(mdev->dev, "Unsupported cfg_type: %d\n", cfg_hdr->cfg_type); + cfg_hdr->is_enable = 0; + ret = -EPROTO; + break; + } + +end: + return ret; +} + static struct mtk_stale_list *mtk_port_stale_list_create(struct mtk_port_mngr *port_mngr) { struct mtk_stale_list *s_list; - /* cannot use devm_kzalloc here, because should pair with the free operation which + /* can not use devm_kzalloc here, because should pair with the free operation which * may be no dev pointer. 
*/ s_list = kzalloc(sizeof(*s_list), GFP_KERNEL); @@ -439,7 +531,7 @@ static void mtk_port_trb_free(struct kref *trb_kref) skb = container_of((char *)trb, struct sk_buff, cb[0]); if (trb->cmd == TRB_CMD_TX) - dev_kfree_skb_any(skb); + mtk_port_free_tx_skb(port, skb); else mtk_bm_free(port->port_mngr->ctrl_blk->bm_pool, skb); } @@ -513,7 +605,7 @@ static int mtk_port_tx_complete(struct sk_buff *skb) return 0; } -static int mtk_port_status_check(struct mtk_port *port) +int mtk_port_status_check(struct mtk_port *port) { /* If port is enable, it must on port_mngr's port_tbl, so the mdev must exist. */ if (!test_bit(PORT_S_ENABLE, &port->status)) { @@ -576,7 +668,7 @@ int mtk_port_send_data(struct mtk_port *port, void *data) dev_warn(port_mngr->ctrl_blk->mdev->dev, "Failed to submit trb for port(%s), ret=%d\n", port->info.name, ret); kref_put(&trb->kref, mtk_port_trb_free); /* kref count 2->1 */ - dev_kfree_skb_any(skb); + mtk_port_free_tx_skb(port, skb); goto end; } @@ -1147,6 +1239,13 @@ void mtk_port_mngr_fsm_state_handler(struct mtk_fsm_param *fsm_param, void *arg) } port->enable = true; ports_ops[port->info.type]->enable(port); + port = mtk_port_search_by_id(port_mngr, CCCI_MD_LOG_RX); + if (!port) { + dev_err(port_mngr->ctrl_blk->mdev->dev, "Failed to find MD LOG port\n"); + goto err; + } + port->enable = true; + ports_ops[port->info.type]->enable(port); } else if (flag & FSM_F_MDEE_CLEARQ_DONE) { /* the time 2000ms recommended by device-end * it's for wait device prepares the data diff --git a/drivers/net/wwan/mediatek/mtk_port.h b/drivers/net/wwan/mediatek/mtk_port.h index 4f6d2ddd63f0..9636247524e7 100644 --- a/drivers/net/wwan/mediatek/mtk_port.h +++ b/drivers/net/wwan/mediatek/mtk_port.h @@ -26,6 +26,7 @@ #define MTK_PORT_NAME_HDR "wwanD" #define MTK_DFLT_MAX_DEV_CNT (10) #define MTK_DFLT_PORT_NAME_LEN (20) +#define MTK_DFLT_FULL_NAME_LEN (50) /* Mapping MTK_PEER_ID and mtk_port_tbl index */ #define MTK_PORT_TBL_TYPE(ch) (MTK_PEER_ID(ch) - 1) @@ -64,6 +65,12 @@ enum mtk_ccci_ch { /* to MD */ CCCI_CONTROL_RX = 0x2000, CCCI_CONTROL_TX = 0x2001, + CCCI_UART2_RX = 0x200A, + CCCI_UART2_TX = 0x200C, + CCCI_MD_LOG_RX = 0x202A, + CCCI_MD_LOG_TX = 0x202B, + CCCI_MBIM_RX = 0x20D0, + CCCI_MBIM_TX = 0x20D1, }; enum mtk_port_flag { @@ -81,6 +88,8 @@ enum mtk_port_tbl { enum mtk_port_type { PORT_TYPE_INTERNAL, + PORT_TYPE_WWAN, + PORT_TYPE_RELAYFS, PORT_TYPE_MAX }; @@ -89,13 +98,30 @@ struct mtk_internal_port { int (*recv_cb)(void *arg, struct sk_buff *skb); }; +struct mtk_wwan_port { + /* w_lock Protect wwan_port when recv data and disable port at the same time */ + struct mutex w_lock; + int w_type; + void *w_port; +}; + +struct mtk_relayfs_port { + struct dentry *ctrl_file; + struct dentry *d_wwan; + struct rchan *rc; + atomic_t is_full; + char ctrl_file_name[MTK_DFLT_FULL_NAME_LEN]; +}; + /* union mtk_port_priv - Contains private data for different type of ports. - * @cdev: private data for character device port. * @i_priv: private data for internal other user. + * @w_priv: private data for wwan port. + * @rf_priv: private data for relayfs port */ union mtk_port_priv { - struct cdev *cdev; struct mtk_internal_port i_priv; + struct mtk_wwan_port w_priv; + struct mtk_relayfs_port rf_priv; }; /* struct mtk_port_cfg - Contains port's basic configuration. 
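For reference, the PORT_CFG_CH_INFO payload handled by mtk_port_tbl_update() above is a packed array of mtk_port_cfg_ch_info records carried in mtk_port_cfg_header::data; both structures are defined in the hunk that follows. A minimal, illustrative walk of that payload (not part of the patch; dump_ch_info() is a hypothetical helper) might look like:

	static void dump_ch_info(struct mtk_port_cfg_header *hdr)
	{
		int off = 0;

		/* hdr->data holds back-to-back mtk_port_cfg_ch_info records and
		 * port_config_len gives the total payload length in bytes.
		 */
		while (off < le16_to_cpu(hdr->port_config_len)) {
			struct mtk_port_cfg_ch_info *ch =
				(struct mtk_port_cfg_ch_info *)(hdr->data + off);

			pr_debug("port %s, dl_ch 0x%x, peer %u\n",
				 ch->port_name, le16_to_cpu(ch->dl_ch_id), ch->peer_id);
			off += sizeof(*ch);
		}
	}

mtk_port_tbl_update() performs the same walk, but additionally looks each dl_ch_id up in the port table and flips msg_type to PORT_CFG_MSG_RESPONSE so that the same buffer can be written back to the device as the acknowledgement.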
@@ -203,6 +229,54 @@ struct mtk_port_enum_msg { u8 data[]; } __packed; +enum mtk_port_cfg_type { + PORT_CFG_CH_INFO = 4, + PORT_CFG_HIF_INFO, +}; + +enum mtk_port_cfg_msg_type { + PORT_CFG_MSG_REQUEST = 1, + PORT_CFG_MSG_RESPONSE, +}; + +struct mtk_port_cfg_ch_info { + __le16 dl_ch_id; + u8 dl_hw_queue_id; + u8 ul_hw_queue_id; + u8 reserve[2]; + u8 peer_id; + u8 reserved; + u8 port_name_len; + char port_name[20]; +} __packed; + +struct mtk_port_cfg_hif_info { + u8 dl_hw_queue_id; + u8 ul_hw_queue_id; + u8 peer_id; + u8 reserved; + __le32 dl_hw_queue_mtu; + __le32 ul_hw_queue_mtu; +} __packed; + +/* struct mtk_port_cfg_header - Message from device to configure unified port + * @port_config_len: data length. + * @cfg_type: 4:Channel info/ 5:Hif info + * @msg_type: 1:request/ 2:response + * @is_enable: 0:disable/ 1:enable + * @reserve: reserved bytes. + * @data: channel config information (@ref mtk_port_cfg_ch_info) or + * HIF config information (@ref mtk_port_cfg_hif_info), depending on the cfg_type value. + */ +struct mtk_port_cfg_header { + __le16 port_config_len; + u8 cfg_type; + u8 msg_type; + u8 is_enable; + u8 reserve[3]; + u8 data[]; +} __packed; + struct mtk_ccci_header { __le32 packet_header; __le32 packet_len; @@ -217,8 +291,10 @@ struct mtk_port *mtk_port_search_by_name(struct mtk_port_mngr *port_mngr, char * void mtk_port_stale_list_grp_cleanup(void); int mtk_port_add_header(struct sk_buff *skb); struct mtk_ccci_header *mtk_port_strip_header(struct sk_buff *skb); +int mtk_port_status_check(struct mtk_port *port); int mtk_port_send_data(struct mtk_port *port, void *data); int mtk_port_status_update(struct mtk_md_dev *mdev, void *data); +int mtk_port_tbl_update(struct mtk_md_dev *mdev, void *data); int mtk_port_vq_enable(struct mtk_port *port); int mtk_port_vq_disable(struct mtk_port *port); void mtk_port_mngr_fsm_state_handler(struct mtk_fsm_param *fsm_param, void *arg); diff --git a/drivers/net/wwan/mediatek/mtk_port_io.c b/drivers/net/wwan/mediatek/mtk_port_io.c index baa0fad5d40b..8aea88246506 100644 --- a/drivers/net/wwan/mediatek/mtk_port_io.c +++ b/drivers/net/wwan/mediatek/mtk_port_io.c @@ -3,9 +3,25 @@ * Copyright (c) 2022, MediaTek Inc. */ +#ifdef CONFIG_COMPAT +#include +#endif +#include +#include +#include +#include +#include +#include +#include + #include "mtk_port_io.h" +#define MTK_CCCI_CLASS_NAME "ccci_node" #define MTK_DFLT_READ_TIMEOUT (1 * HZ) +#define MTK_RELAYFS_N_SUB_BUFF 16 +#define MTK_RELAYFS_CTRL_FILE_PERM 0600 + +static void *ccci_class; static int mtk_port_get_locked(struct mtk_port *port) { @@ -34,6 +50,29 @@ static void mtk_port_put_locked(struct mtk_port *port) mutex_unlock(&port_mngr_grp_mtx); } +/* mtk_port_io_init() - Function to initialize the device driver. + * Create ccci_class and register each type of device driver with the kernel. + * + * This function is called at driver module initialization. + */ +int mtk_port_io_init(void) +{ + ccci_class = class_create(THIS_MODULE, MTK_CCCI_CLASS_NAME); + if (IS_ERR(ccci_class)) + return PTR_ERR(ccci_class); + return 0; +} + +/* mtk_port_io_exit() - Function to delete the device driver. + * Unregister each type of device driver from the kernel and destroy ccci_class. + * + * This function is called at driver module exit.
+ */ +void mtk_port_io_exit(void) +{ + class_destroy(ccci_class); +} + static void mtk_port_struct_init(struct mtk_port *port) { port->tx_seq = 0; @@ -45,6 +84,23 @@ static void mtk_port_struct_init(struct mtk_port *port) init_waitqueue_head(&port->trb_wq); init_waitqueue_head(&port->rx_wq); mutex_init(&port->read_buf_lock); + mutex_init(&port->write_lock); +} + +static int mtk_port_copy_data_from(void *to, union user_buf from, unsigned int len, + unsigned int offset, bool from_user_space) +{ + int ret = 0; + + if (from_user_space) { + ret = copy_from_user(to, from.ubuf + offset, len); + if (ret) + ret = -EFAULT; + } else { + memcpy(to, from.kbuf + offset, len); + } + + return ret; } static int mtk_port_internal_init(struct mtk_port *port) @@ -77,7 +133,7 @@ static int mtk_port_internal_enable(struct mtk_port *port) if (test_bit(PORT_S_ENABLE, &port->status)) { dev_info(port->port_mngr->ctrl_blk->mdev->dev, - "Skip to enable port( %s )\n", port->info.name); + "Skip to enable port(%s)\n", port->info.name); return 0; } @@ -171,6 +227,56 @@ static void mtk_port_common_close(struct mtk_port *port) skb_queue_purge(&port->rx_skb_list); } +static int mtk_port_common_write(struct mtk_port *port, union user_buf buf, unsigned int len, + bool from_user_space) +{ + unsigned int tx_cnt, left_cnt = len; + struct sk_buff *skb; + int ret; + +start_write: + ret = mtk_port_status_check(port); + if (ret) + goto end_write; + + skb = mtk_port_alloc_tx_skb(port); + if (!skb) { + dev_err(port->port_mngr->ctrl_blk->mdev->dev, + "Failed to alloc skb for port(%s)\n", port->info.name); + ret = -ENOMEM; + goto end_write; + } + + if (!(port->info.flags & PORT_F_RAW_DATA)) { + /* Reserve enough buf len for ccci header */ + skb_reserve(skb, sizeof(struct mtk_ccci_header)); + } + + tx_cnt = min(left_cnt, port->tx_mtu); + ret = mtk_port_copy_data_from(skb_put(skb, tx_cnt), buf, tx_cnt, len - left_cnt, + from_user_space); + if (ret) { + dev_err(port->port_mngr->ctrl_blk->mdev->dev, + "Failed to copy data for port(%s)\n", port->info.name); + mtk_port_free_tx_skb(port, skb); + goto end_write; + } + + ret = mtk_port_send_data(port, skb); + if (ret < 0) + goto end_write; + + left_cnt -= ret; + if (left_cnt) { + dev_dbg(port->port_mngr->ctrl_blk->mdev->dev, + "Port(%s) send %dBytes, but still left %dBytes to send\n", + port->info.name, ret, left_cnt); + goto start_write; + } +end_write: + return (len > left_cnt) ? (len - left_cnt) : ret; +} + /* mtk_port_internal_open() - Function for open internal port. * @mdev: pointer to mtk_md_dev. * @name: the name of port will be opened. 
@@ -204,7 +310,10 @@ void *mtk_port_internal_open(struct mtk_md_dev *mdev, char *name, int flag) goto err; } - port->info.flags |= PORT_F_BLOCKING; + if (flag & O_NONBLOCK) + port->info.flags &= ~PORT_F_BLOCKING; + else + port->info.flags |= PORT_F_BLOCKING; err: return port; } @@ -284,6 +393,346 @@ void mtk_port_internal_recv_register(void *i_port, priv->recv_cb = cb; } +static int mtk_port_wwan_open(struct wwan_port *w_port) +{ + struct mtk_port *port; + int ret; + + port = wwan_port_get_drvdata(w_port); + ret = mtk_port_get_locked(port); + if (ret) + return ret; + + ret = mtk_port_common_open(port); + if (ret) + mtk_port_put_locked(port); + + return ret; +} + +static void mtk_port_wwan_close(struct wwan_port *w_port) +{ + struct mtk_port *port = wwan_port_get_drvdata(w_port); + + mtk_port_common_close(port); + mtk_port_put_locked(port); +} + +static int mtk_port_wwan_write(struct wwan_port *w_port, struct sk_buff *skb) +{ + struct mtk_port *port = wwan_port_get_drvdata(w_port); + union user_buf user_buf; + + port->info.flags &= ~PORT_F_BLOCKING; + user_buf.kbuf = (void *)skb->data; + return mtk_port_common_write(port, user_buf, skb->len, false); +} + +static int mtk_port_wwan_write_blocking(struct wwan_port *w_port, struct sk_buff *skb) +{ + struct mtk_port *port = wwan_port_get_drvdata(w_port); + union user_buf user_buf; + + port->info.flags |= PORT_F_BLOCKING; + user_buf.kbuf = (void *)skb->data; + return mtk_port_common_write(port, user_buf, skb->len, false); +} + +static __poll_t mtk_port_wwan_poll(struct wwan_port *w_port, struct file *file, + struct poll_table_struct *poll) +{ + struct mtk_port *port = wwan_port_get_drvdata(w_port); + struct mtk_ctrl_blk *ctrl_blk; + __poll_t mask = 0; + + if (mtk_port_status_check(port)) + goto end_poll; + + ctrl_blk = port->port_mngr->ctrl_blk; + poll_wait(file, &port->trb_wq, poll); + if (!VQ_LIST_FULL(ctrl_blk->trans, port->info.vq_id)) + mask |= EPOLLOUT | EPOLLWRNORM; + else + dev_info(ctrl_blk->mdev->dev, "VQ(%d) skb_list_len is %d\n", + port->info.vq_id, ctrl_blk->trans->skb_list[port->info.vq_id].qlen); + +end_poll: + return mask; +} + +static const struct wwan_port_ops wwan_ops = { + .start = mtk_port_wwan_open, + .stop = mtk_port_wwan_close, + .tx = mtk_port_wwan_write, + .tx_blocking = mtk_port_wwan_write_blocking, + .tx_poll = mtk_port_wwan_poll, +}; + +static int mtk_port_wwan_init(struct mtk_port *port) +{ + mtk_port_struct_init(port); + port->enable = false; + + mutex_init(&port->priv.w_priv.w_lock); + + switch (port->info.rx_ch) { + case CCCI_MBIM_RX: + port->priv.w_priv.w_type = WWAN_PORT_MBIM; + break; + case CCCI_UART2_RX: + port->priv.w_priv.w_type = WWAN_PORT_AT; + break; + default: + port->priv.w_priv.w_type = WWAN_PORT_UNKNOWN; + break; + } + + return 0; +} + +static int mtk_port_wwan_exit(struct mtk_port *port) +{ + if (test_bit(PORT_S_ENABLE, &port->status)) + ports_ops[port->info.type]->disable(port); + + pr_err("[TMI] WWAN port(%s) exit is complete\n", port->info.name); + + return 0; +} + +static int mtk_port_wwan_enable(struct mtk_port *port) +{ + struct mtk_port_mngr *port_mngr; + int ret = 0; + + port_mngr = port->port_mngr; + + if (test_bit(PORT_S_ENABLE, &port->status)) { + dev_err(port_mngr->ctrl_blk->mdev->dev, + "Skip to enable port( %s )\n", port->info.name); + goto end; + } + + ret = mtk_port_vq_enable(port); + if (ret && ret != -EBUSY) + goto end; + + port->priv.w_priv.w_port = wwan_create_port(port_mngr->ctrl_blk->mdev->dev, + port->priv.w_priv.w_type, &wwan_ops, port); + if 
(IS_ERR(port->priv.w_priv.w_port)) { + dev_err(port_mngr->ctrl_blk->mdev->dev, + "Failed to create wwan port for (%s)\n", port->info.name); + ret = PTR_ERR(port->priv.w_priv.w_port); + goto end; + } + + set_bit(PORT_S_RDWR, &port->status); + set_bit(PORT_S_ENABLE, &port->status); + dev_info(port_mngr->ctrl_blk->mdev->dev, + "Port(%s) enable is complete\n", port->info.name); + + return 0; +end: + return ret; +} + +static int mtk_port_wwan_disable(struct mtk_port *port) +{ + struct wwan_port *w_port; + + if (!test_and_clear_bit(PORT_S_ENABLE, &port->status)) { + dev_info(port->port_mngr->ctrl_blk->mdev->dev, + "Skip to disable port(%s)\n", port->info.name); + return 0; + } + + clear_bit(PORT_S_RDWR, &port->status); + w_port = port->priv.w_priv.w_port; + /* When the port is being disabled, port manager may receive RX data + * and try to call wwan_port_rx(). So the w_lock is to protect w_port + * from using by disable flow and receive flow at the same time. + */ + mutex_lock(&port->priv.w_priv.w_lock); + port->priv.w_priv.w_port = NULL; + mutex_unlock(&port->priv.w_priv.w_lock); + + wwan_remove_port(w_port); + + mtk_port_vq_disable(port); + + dev_info(port->port_mngr->ctrl_blk->mdev->dev, + "Port(%s) disable is complete\n", port->info.name); + + return 0; +} + +static int mtk_port_wwan_recv(struct mtk_port *port, struct sk_buff *skb) +{ + if (!test_bit(PORT_S_OPEN, &port->status)) { + /* If current port is not opened by any user, the received data will be dropped */ + dev_warn_ratelimited(port->port_mngr->ctrl_blk->mdev->dev, + "Unabled to recv: (%s) not opened\n", port->info.name); + goto drop_data; + } + + /* Protect w_port from using by disable flow and receive flow at the same time. */ + mutex_lock(&port->priv.w_priv.w_lock); + if (!port->priv.w_priv.w_port) { + mutex_unlock(&port->priv.w_priv.w_lock); + dev_warn_ratelimited(port->port_mngr->ctrl_blk->mdev->dev, + "Invalid (%s) wwan_port, drop packet\n", port->info.name); + goto drop_data; + } + + wwan_port_rx(port->priv.w_priv.w_port, skb); + mutex_unlock(&port->priv.w_priv.w_lock); + return 0; + +drop_data: + mtk_port_free_rx_skb(port, skb); + return -ENXIO; +} + +static struct dentry *trace_create_buf_file_handler(const char *filename, struct dentry *parent, + umode_t mode, struct rchan_buf *buf, + int *is_global) +{ + *is_global = 1; + return debugfs_create_file(filename, mode, parent, buf, &relay_file_operations); +} + +static int trace_remove_buf_file_handler(struct dentry *dentry) +{ + debugfs_remove_recursive(dentry); + return 0; +} + +static int trace_subbuf_start_handler(struct rchan_buf *buf, void *subbuf, + void *prev_subbuf, size_t prev_padding) +{ + struct mtk_port *port = buf->chan->private_data; + + if (relay_buf_full(buf)) { + pr_err_ratelimited("Failed to write relayfs buffer"); + atomic_set(&port->priv.rf_priv.is_full, 1); + return 0; + } + atomic_set(&port->priv.rf_priv.is_full, 0); + return 1; +} + +static struct rchan_callbacks relay_callbacks = { + .subbuf_start = trace_subbuf_start_handler, + .create_buf_file = trace_create_buf_file_handler, + .remove_buf_file = trace_remove_buf_file_handler, +}; + +static int mtk_port_relayfs_enable(struct mtk_port *port) +{ + struct dentry *debugfs_pdev = wwan_get_debugfs_dir(port->port_mngr->ctrl_blk->mdev->dev); + int ret; + + if (IS_ERR_OR_NULL(debugfs_pdev)) { + dev_err(port->port_mngr->ctrl_blk->mdev->dev, + "Failed to get wwan debugfs dentry port(%s)\n", port->info.name); + return 0; + } + port->priv.rf_priv.d_wwan = debugfs_pdev; + + if (test_bit(PORT_S_ENABLE, 
&port->status)) { + dev_info(port->port_mngr->ctrl_blk->mdev->dev, + "Skip to enable port( %s )\n", port->info.name); + return 0; + } + + ret = mtk_port_vq_enable(port); + if (ret && ret != -EBUSY) + goto err_open_vq; + + dev_info(port->port_mngr->ctrl_blk->mdev->dev, + "Port(%s) enable is complete, rx_buf_size: %d * %d\n", + port->info.name, port->rx_mtu, MTK_RELAYFS_N_SUB_BUFF); + port->priv.rf_priv.rc = relay_open(port->info.name, + debugfs_pdev, + port->rx_mtu, + MTK_RELAYFS_N_SUB_BUFF, + &relay_callbacks, port); + if (!port->priv.rf_priv.rc) + goto err_open_relay; + + set_bit(PORT_S_RDWR, &port->status); + set_bit(PORT_S_ENABLE, &port->status); + /* Open port and allow to receive data */ + ret = mtk_port_common_open(port); + if (ret) + goto err_open_port; + port->info.flags &= ~PORT_F_BLOCKING; + return 0; + +err_open_port: + relay_close(port->priv.rf_priv.rc); +err_open_relay: + mtk_port_vq_disable(port); +err_open_vq: + wwan_put_debugfs_dir(port->priv.rf_priv.d_wwan); + return ret; +} + +static int mtk_port_relayfs_disable(struct mtk_port *port) +{ + if (!test_and_clear_bit(PORT_S_ENABLE, &port->status)) { + dev_info(port->port_mngr->ctrl_blk->mdev->dev, + "Skip to disable port(%s)\n", port->info.name); + goto out; + } + clear_bit(PORT_S_RDWR, &port->status); + mtk_port_common_close(port); + + relay_close(port->priv.rf_priv.rc); + wwan_put_debugfs_dir(port->priv.rf_priv.d_wwan); + mtk_port_vq_disable(port); + dev_info(port->port_mngr->ctrl_blk->mdev->dev, + "Port(%s) disable is complete\n", port->info.name); +out: + return 0; +} + +static int mtk_port_relayfs_recv(struct mtk_port *port, struct sk_buff *skb) +{ + struct mtk_relayfs_port *relayfs_port = &port->priv.rf_priv; + + while (test_bit(PORT_S_OPEN, &port->status) && test_bit(PORT_S_ENABLE, &port->status)) { + __relay_write(relayfs_port->rc, skb->data, skb->len); + if (atomic_read(&port->priv.rf_priv.is_full)) { + msleep(20); + continue; + } else { + break; + } + } + + mtk_port_free_rx_skb(port, skb); + return 0; +} + +static int mtk_port_relayfs_init(struct mtk_port *port) +{ + mtk_port_struct_init(port); + port->enable = false; + atomic_set(&port->priv.rf_priv.is_full, 0); + + return 0; +} + +static int mtk_port_relayfs_exit(struct mtk_port *port) +{ + if (test_bit(PORT_S_ENABLE, &port->status)) + ports_ops[port->info.type]->disable(port); + + pr_err("[TMI] RelayFS Port(%s) exit is complete\n", port->info.name); + return 0; +} + static const struct port_ops port_internal_ops = { .init = mtk_port_internal_init, .exit = mtk_port_internal_exit, @@ -293,6 +742,26 @@ static const struct port_ops port_internal_ops = { .recv = mtk_port_internal_recv, }; +static const struct port_ops port_wwan_ops = { + .init = mtk_port_wwan_init, + .exit = mtk_port_wwan_exit, + .reset = mtk_port_reset, + .enable = mtk_port_wwan_enable, + .disable = mtk_port_wwan_disable, + .recv = mtk_port_wwan_recv, +}; + +static const struct port_ops port_relayfs_ops = { + .init = mtk_port_relayfs_init, + .exit = mtk_port_relayfs_exit, + .reset = mtk_port_reset, + .enable = mtk_port_relayfs_enable, + .disable = mtk_port_relayfs_disable, + .recv = mtk_port_relayfs_recv, +}; + const struct port_ops *ports_ops[PORT_TYPE_MAX] = { &port_internal_ops, + &port_wwan_ops, + &port_relayfs_ops }; diff --git a/drivers/net/wwan/mediatek/mtk_port_io.h b/drivers/net/wwan/mediatek/mtk_port_io.h index 859ade43d923..06a0d225a5df 100644 --- a/drivers/net/wwan/mediatek/mtk_port_io.h +++ b/drivers/net/wwan/mediatek/mtk_port_io.h @@ -10,9 +10,12 @@ #include #include "mtk_bm.h" 
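For context, the TX helpers added to this header below, mtk_port_alloc_tx_skb() and mtk_port_free_tx_skb(), are used in pairs on the transmit path. The following condensed sketch (tx_one_chunk() is a hypothetical name; the looping, user-copy and error paths of mtk_port_common_write() in this patch are trimmed, and len is assumed to be at most port->tx_mtu) shows the intended pairing:

	static int tx_one_chunk(struct mtk_port *port, const void *buf, unsigned int len)
	{
		struct sk_buff *skb;

		/* The pool is chosen by port->tx_mtu: ports whose TX MTU exceeds
		 * VQ_MTU_3_5K allocate from the 63K pool, others from the default pool.
		 */
		skb = mtk_port_alloc_tx_skb(port);
		if (!skb)
			return -ENOMEM;

		if (!(port->info.flags & PORT_F_RAW_DATA))
			skb_reserve(skb, sizeof(struct mtk_ccci_header));

		memcpy(skb_put(skb, len), buf, len);

		/* On a failed submit, mtk_port_send_data() already returns the skb
		 * to the matching pool, so the caller must not free it again.
		 */
		return mtk_port_send_data(port, skb);
	}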
+#include "mtk_ctrl_plane.h" +#include "mtk_dev.h" #include "mtk_port.h" #define MTK_RX_BUF_SIZE (1024 * 1024) +#define MTK_RX_BUF_MAX_SIZE (2 * 1024 * 1024) extern struct mutex port_mngr_grp_mtx; @@ -25,6 +28,14 @@ struct port_ops { int (*recv)(struct mtk_port *port, struct sk_buff *skb); }; +union user_buf { + void __user *ubuf; + void *kbuf; +}; + +int mtk_port_io_init(void); +void mtk_port_io_exit(void); + void *mtk_port_internal_open(struct mtk_md_dev *mdev, char *name, int flag); int mtk_port_internal_close(void *i_port); int mtk_port_internal_write(void *i_port, struct sk_buff *skb); @@ -32,6 +43,36 @@ void mtk_port_internal_recv_register(void *i_port, int (*cb)(void *priv, struct sk_buff *skb), void *arg); +static inline struct sk_buff *mtk_port_alloc_tx_skb(struct mtk_port *port) +{ + struct mtk_ctrl_blk *ctrl_blk = port->port_mngr->ctrl_blk; + struct sk_buff *skb; + + if (port->tx_mtu > VQ_MTU_3_5K) + skb = mtk_bm_alloc(ctrl_blk->bm_pool_63K); + else + skb = mtk_bm_alloc(ctrl_blk->bm_pool); + + return skb; +} + +static inline void mtk_port_free_tx_skb(struct mtk_port *port, struct sk_buff *skb) +{ + switch (port->info.type) { + case PORT_TYPE_WWAN: + case PORT_TYPE_RELAYFS: + if (port->tx_mtu > VQ_MTU_3_5K) + mtk_bm_free(port->port_mngr->ctrl_blk->bm_pool_63K, skb); + else + mtk_bm_free(port->port_mngr->ctrl_blk->bm_pool, skb); + break; + case PORT_TYPE_INTERNAL: + default: + dev_kfree_skb_any(skb); + break; + } +} + static inline void mtk_port_free_rx_skb(struct mtk_port *port, struct sk_buff *skb) { if (!port) diff --git a/drivers/net/wwan/mediatek/pcie/mtk_pci.c b/drivers/net/wwan/mediatek/pcie/mtk_pci.c index 34426e099d19..e80d65588101 100644 --- a/drivers/net/wwan/mediatek/pcie/mtk_pci.c +++ b/drivers/net/wwan/mediatek/pcie/mtk_pci.c @@ -1153,13 +1153,29 @@ static struct pci_driver mtk_pci_drv = { static int __init mtk_drv_init(void) { - return pci_register_driver(&mtk_pci_drv); + int ret; + + ret = mtk_port_io_init(); + if (ret) + goto err_init_devid; + + ret = pci_register_driver(&mtk_pci_drv); + if (ret) + goto err_pci_drv; + + return 0; +err_pci_drv: + mtk_port_io_exit(); +err_init_devid: + + return ret; } module_init(mtk_drv_init); static void __exit mtk_drv_exit(void) { pci_unregister_driver(&mtk_pci_drv); + mtk_port_io_exit(); mtk_port_stale_list_grp_cleanup(); } module_exit(mtk_drv_exit); From patchwork Tue Nov 22 11:11:47 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?b?WWFuY2hhbyBZYW5nICjmnajlvabotoUp?= X-Patchwork-Id: 24303 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp2147769wrr; Tue, 22 Nov 2022 03:39:48 -0800 (PST) X-Google-Smtp-Source: AA0mqf523Z0oiL816q0YofrR9A/oRQzV8HFOlB30xRc6vh+LP7CtiNx3d0tEVL8iiZ95ZPRsv1fd X-Received: by 2002:a17:907:904f:b0:78d:85fe:4951 with SMTP id az15-20020a170907904f00b0078d85fe4951mr7616461ejc.593.1669117188180; Tue, 22 Nov 2022 03:39:48 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1669117188; cv=none; d=google.com; s=arc-20160816; b=cieqoEDBBgpTtYt55nrhh6YMcNIE2LDM3oCqmAG3DGQ38XqCc+bJ4POLJOCa4fUVOc dIYdjuxQgVqIRNgMnbtIfwwZ3w6JZ4DalIVyq6ZYLMu7+440X1mgPNT+N5xU+MkoMYui CijPmwOQDwcRWpRTlvU/xBzLsXesn+td1K3IO6VnuuQ1Mr05BnK0NP0thodT6nwfvI/J zj9ba0iuLq/WOBKc7+bxPJMaGuNM/1EEpy4442SWWn1tJHVjRVpZgnyMjHBKWavvu0+u +U3kPalQZausvzrclvIwXa41eUoM023HOX0HbR3tPXj+x6aAKngR0mdG7mk2LqfV/Ma7 idEg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; 
h=list-id:precedence:mime-version:references:in-reply-to:message-id :date:subject:cc:to:from:dkim-signature; bh=nlut0vPunJctsXNvAzV4Ly5FwHWTU6pgOvdME9LSSAI=; b=wAt6yK3Flkh2UgYldCOQuhG/EEk3q5c9kDWwGLpr7T4QNvQDSPKglmrWBYuAw9XbJW 66ODIa+vceIxS0vlBi9qdZvrY+JqH/3KLeVf5Ti3deaNGOeFnd4HSlAB2gzyL5fjOSa6 fD/vmqdvfjuMlMr2TgzlNgSie7nSysvpNXs6qyhSs4dyVMsOpXVcS5TAgB9Na0+SLOdQ pg9HZaV5XF/JD+FaMe+hMCC5oHV48eDiZVlTb6XswBQ6G7d6nFGvq7xbB7PzwPoSEBRM EoG7ktHhr8NS3LjcfQN1gVzU+sAzeYXVbX/pWANOaK6WjZPiIDn8ktOlQXtzj2EGma+Z giuA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@mediatek.com header.s=dk header.b="O/ZGLdVO"; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=QUARANTINE sp=QUARANTINE dis=NONE) header.from=mediatek.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id j11-20020a05640211cb00b0043e5ca9a0e2si6390572edw.628.2022.11.22.03.39.24; Tue, 22 Nov 2022 03:39:48 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@mediatek.com header.s=dk header.b="O/ZGLdVO"; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=QUARANTINE sp=QUARANTINE dis=NONE) header.from=mediatek.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232831AbiKVL1z (ORCPT + 99 others); Tue, 22 Nov 2022 06:27:55 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45794 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233567AbiKVL1E (ORCPT ); Tue, 22 Nov 2022 06:27:04 -0500 Received: from mailgw02.mediatek.com (unknown [210.61.82.184]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C0EC1532E3; Tue, 22 Nov 2022 03:20:51 -0800 (PST) X-UUID: 98be42061ed74817ba6814c26aa202f0-20221122 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=mediatek.com; s=dk; h=Content-Type:MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:CC:To:From; bh=nlut0vPunJctsXNvAzV4Ly5FwHWTU6pgOvdME9LSSAI=; b=O/ZGLdVO4lRhPt2kUoTTi3J+HdHC9VUtdoK4WAsXhRy4bW8uzP7RBNj19//PG0rKoXuwBgnbuA9wK+U45OsszkAafIR9Hu4PRk6PWM78Y6e/CHMzFzMn1VKOE8MMwLBj/hx7t+CkpOj/zqFtas3GvQbPLEk88UT/i9I5traA/Ls=; X-CID-P-RULE: Release_Ham X-CID-O-INFO: VERSION:1.1.12,REQID:97ed6e0a-391e-4f56-b025-afa72d4533b2,IP:0,U RL:0,TC:0,Content:-25,EDM:0,RT:0,SF:95,FILE:0,BULK:0,RULE:Release_Ham,ACTI ON:release,TS:70 X-CID-INFO: VERSION:1.1.12,REQID:97ed6e0a-391e-4f56-b025-afa72d4533b2,IP:0,URL :0,TC:0,Content:-25,EDM:0,RT:0,SF:95,FILE:0,BULK:0,RULE:Spam_GS981B3D,ACTI ON:quarantine,TS:70 X-CID-META: VersionHash:62cd327,CLOUDID:97577f2f-2938-482e-aafd-98d66723b8a9,B ulkID:2211221920474XUY4O9W,BulkQuantity:0,Recheck:0,SF:38|28|17|19|48,TC:n il,Content:0,EDM:-3,IP:nil,URL:0,File:nil,Bulk:nil,QS:nil,BEC:nil,COL:1 X-UUID: 98be42061ed74817ba6814c26aa202f0-20221122 Received: from mtkmbs13n1.mediatek.inc [(172.21.101.193)] by mailgw02.mediatek.com (envelope-from ) (Generic MTA with TLSv1.2 ECDHE-RSA-AES256-GCM-SHA384 256/256) with ESMTP id 422360825; Tue, 22 Nov 2022 19:20:44 +0800 Received: from mtkmbs11n2.mediatek.inc (172.21.101.187) by mtkmbs10n2.mediatek.inc (172.21.101.183) with Microsoft SMTP 
Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.792.3; Tue, 22 Nov 2022 19:20:42 +0800 Received: from mcddlt001.gcn.mediatek.inc (10.19.240.15) by mtkmbs11n2.mediatek.inc (172.21.101.73) with Microsoft SMTP Server id 15.2.792.15 via Frontend Transport; Tue, 22 Nov 2022 19:20:40 +0800 From: Yanchao Yang To: Loic Poulain , Sergey Ryazanov , Johannes Berg , "David S . Miller" , Eric Dumazet , "Jakub Kicinski" , Paolo Abeni , netdev ML , kernel ML CC: MTK ML , Liang Lu , Haijun Liu , Hua Yang , Ting Wang , Felix Chen , Mingliang Xu , Min Dong , Aiden Wang , Guohao Zhang , Chris Feng , "Yanchao Yang" , Lambert Wang , Mingchuang Qiao , Xiayu Zhang , Haozhe Chang , MediaTek Corporation Subject: [PATCH net-next v1 08/13] net: wwan: tmi: Introduce data plane hardware interface Date: Tue, 22 Nov 2022 19:11:47 +0800 Message-ID: <20221122111152.160377-9-yanchao.yang@mediatek.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20221122111152.160377-1-yanchao.yang@mediatek.com> References: <20221122111152.160377-1-yanchao.yang@mediatek.com> MIME-Version: 1.0 X-MTK: N X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_MSPIKE_H2,SPF_HELO_PASS, SPF_PASS,UNPARSEABLE_RELAY autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1750196225144197218?= X-GMAIL-MSGID: =?utf-8?q?1750196225144197218?= From: MediaTek Corporation Data Plane Modem AP Interface (DPMAIF) hardware layer provides hardware abstraction for the upper layer (DPMAIF HIF). It implements functions to do the data plane hardware's configuration, TX/RX control and interrupt handling. Signed-off-by: Hua Yang Signed-off-by: MediaTek Corporation --- drivers/net/wwan/mediatek/Makefile | 1 + drivers/net/wwan/mediatek/mtk_dpmaif_drv.h | 277 +++ .../wwan/mediatek/pcie/mtk_dpmaif_drv_t800.c | 2115 +++++++++++++++++ .../wwan/mediatek/pcie/mtk_dpmaif_reg_t800.h | 368 +++ 4 files changed, 2761 insertions(+) create mode 100644 drivers/net/wwan/mediatek/mtk_dpmaif_drv.h create mode 100644 drivers/net/wwan/mediatek/pcie/mtk_dpmaif_drv_t800.c create mode 100644 drivers/net/wwan/mediatek/pcie/mtk_dpmaif_reg_t800.h diff --git a/drivers/net/wwan/mediatek/Makefile b/drivers/net/wwan/mediatek/Makefile index 60a32d46183b..662594e1ad95 100644 --- a/drivers/net/wwan/mediatek/Makefile +++ b/drivers/net/wwan/mediatek/Makefile @@ -9,6 +9,7 @@ mtk_tmi-y = \ mtk_ctrl_plane.o \ mtk_cldma.o \ pcie/mtk_cldma_drv_t800.o \ + pcie/mtk_dpmaif_drv_t800.o \ mtk_port.o \ mtk_port_io.o \ mtk_fsm.o diff --git a/drivers/net/wwan/mediatek/mtk_dpmaif_drv.h b/drivers/net/wwan/mediatek/mtk_dpmaif_drv.h new file mode 100644 index 000000000000..29b6c99bba42 --- /dev/null +++ b/drivers/net/wwan/mediatek/mtk_dpmaif_drv.h @@ -0,0 +1,277 @@ +/* SPDX-License-Identifier: BSD-3-Clause-Clear + * + * Copyright (c) 2022, MediaTek Inc. 
+ */ + +#ifndef __MTK_DPMAIF_DRV_H__ +#define __MTK_DPMAIF_DRV_H__ + +enum dpmaif_drv_dir { + DPMAIF_TX, + DPMAIF_RX, +}; + +enum mtk_data_hw_feature_type { + DATA_HW_F_LRO = BIT(0), + DATA_HW_F_FRAG = BIT(1), +}; + +enum dpmaif_drv_cmd { + DATA_HW_INTR_COALESCE_SET, + DATA_HW_HASH_GET, + DATA_HW_HASH_SET, + DATA_HW_HASH_KEY_SIZE_GET, + DATA_HW_INDIR_GET, + DATA_HW_INDIR_SET, + DATA_HW_INDIR_SIZE_GET, + DATA_HW_LRO_SET, +}; + +struct dpmaif_drv_intr { + enum dpmaif_drv_dir dir; + unsigned int q_mask; + unsigned int mode; + unsigned int pkt_threshold; + unsigned int time_threshold; +}; + +struct dpmaif_hpc_rule { + unsigned int type:4; + unsigned int flow_lab:20; /* only use for ipv6 */ + unsigned int hop_lim:8; /* only use for ipv6 */ + unsigned short src_port; + unsigned short dst_port; + union{ + struct{ + unsigned int v4src_addr; + unsigned int v4dst_addr; + unsigned int resv[6]; + }; + struct{ + unsigned int v6src_addr3; + unsigned int v6dst_addr3; + unsigned int v6src_addr0; + unsigned int v6src_addr1; + unsigned int v6src_addr2; + unsigned int v6dst_addr0; + unsigned int v6dst_addr1; + unsigned int v6dst_addr2; + }; + }; +}; + +enum mtk_drv_err { + DATA_ERR_STOP_MAX = 10, + DATA_HW_REG_TIMEOUT, + DATA_HW_REG_CHK_FAIL, + DATA_FLOW_CHK_ERR, + DATA_DMA_MAP_ERR, + DATA_DL_ONCE_MORE, + DATA_PIT_SEQ_CHK_FAIL, + DATA_LOW_MEM_TYPE_MAX, + DATA_LOW_MEM_DRB, + DATA_LOW_MEM_SKB, +}; + +#define DPMAIF_RXQ_CNT_MAX 2 +#define DPMAIF_TXQ_CNT_MAX 5 +#define DPMAIF_IRQ_CNT_MAX 3 + +#define DPMAIF_PIT_SEQ_MAX 251 + +#define DPMAIF_HW_PKT_ALIGN 64 +#define DPMAIF_HW_BAT_RSVLEN 0 + +enum { + DPMAIF_CLEAR_INTR, + DPMAIF_UNMASK_INTR, +}; + +enum dpmaif_drv_dlq_id { + DPMAIF_DLQ0 = 0, + DPMAIF_DLQ1, +}; + +struct dpmaif_drv_dlq { + bool q_started; + dma_addr_t pit_base; + u32 pit_size; +}; + +struct dpmaif_drv_ulq { + bool q_started; + dma_addr_t drb_base; + u32 drb_size; +}; + +struct dpmaif_drv_data_ring { + dma_addr_t normal_bat_base; + u32 normal_bat_size; + dma_addr_t frag_bat_base; + u32 frag_bat_size; + u32 normal_bat_remain_size; + u32 normal_bat_pkt_bufsz; + u32 frag_bat_pkt_bufsz; + u32 normal_bat_rsv_length; + u32 pkt_bid_max_cnt; + u32 pkt_alignment; + u32 mtu; + u32 chk_pit_num; + u32 chk_normal_bat_num; + u32 chk_frag_bat_num; +}; + +struct dpmaif_drv_property { + u32 features; + struct dpmaif_drv_dlq dlq[DPMAIF_RXQ_CNT_MAX]; + struct dpmaif_drv_ulq ulq[DPMAIF_TXQ_CNT_MAX]; + struct dpmaif_drv_data_ring ring; +}; + +enum dpmaif_drv_ring_type { + DPMAIF_PIT, + DPMAIF_BAT, + DPMAIF_FRAG, + DPMAIF_DRB, +}; + +enum dpmaif_drv_ring_idx { + DPMAIF_PIT_WIDX, + DPMAIF_PIT_RIDX, + DPMAIF_BAT_WIDX, + DPMAIF_BAT_RIDX, + DPMAIF_FRAG_WIDX, + DPMAIF_FRAG_RIDX, + DPMAIF_DRB_WIDX, + DPMAIF_DRB_RIDX, +}; + +struct dpmaif_drv_irq_en_mask { + u32 ap_ul_l2intr_en_mask; + u32 ap_dl_l2intr_en_mask; + u32 ap_udl_ip_busy_en_mask; +}; + +struct dpmaif_drv_info { + struct mtk_md_dev *mdev; + bool ulq_all_enable, dlq_all_enable; + struct dpmaif_drv_property drv_property; + struct dpmaif_drv_irq_en_mask drv_irq_en_mask; + struct dpmaif_drv_ops *drv_ops; +}; + +struct dpmaif_drv_cfg { + dma_addr_t drb_base[DPMAIF_TXQ_CNT_MAX]; + u32 drb_cnt[DPMAIF_TXQ_CNT_MAX]; + dma_addr_t pit_base[DPMAIF_RXQ_CNT_MAX]; + u32 pit_cnt[DPMAIF_RXQ_CNT_MAX]; + dma_addr_t normal_bat_base; + u32 normal_bat_cnt; + dma_addr_t frag_bat_base; + u32 frag_bat_cnt; + u32 normal_bat_buf_size; + u32 frag_bat_buf_size; + u32 max_mtu; + u32 features; +}; + +enum dpmaif_drv_intr_type { + DPMAIF_INTR_MIN = 0, + /* uplink part */ + DPMAIF_INTR_UL_DONE, + /* 
downlink part */ + DPMAIF_INTR_DL_BATCNT_LEN_ERR, + DPMAIF_INTR_DL_FRGCNT_LEN_ERR, + DPMAIF_INTR_DL_PITCNT_LEN_ERR, + DPMAIF_INTR_DL_DONE, + DPMAIF_INTR_MAX +}; + +#define DPMAIF_INTR_COUNT ((DPMAIF_INTR_MAX) - (DPMAIF_INTR_MIN) - 1) + +struct dpmaif_drv_intr_info { + unsigned char intr_cnt; + enum dpmaif_drv_intr_type intr_types[DPMAIF_INTR_COUNT]; + /* it's a queue mask or queue index */ + u32 intr_queues[DPMAIF_INTR_COUNT]; +}; + +/* This structure defines the management hooks for dpmaif devices. */ +struct dpmaif_drv_ops { + /* Initialize dpmaif hardware. */ + int (*init)(struct dpmaif_drv_info *drv_info, void *data); + /* Start dpmaif hardware transaction and unmask dpmaif interrupt. */ + int (*start_queue)(struct dpmaif_drv_info *drv_info, enum dpmaif_drv_dir dir); + /* Stop dpmaif hardware transaction and mask dpmaif interrupt. */ + int (*stop_queue)(struct dpmaif_drv_info *drv_info, enum dpmaif_drv_dir dir); + /* Check, mask and clear the dpmaif interrupts, + * and then, collect interrupt information for data plane transaction layer. + */ + int (*intr_handle)(struct dpmaif_drv_info *drv_info, void *data, u8 irq_id); + /* Unmask or clear dpmaif interrupt. */ + int (*intr_complete)(struct dpmaif_drv_info *drv_info, enum dpmaif_drv_intr_type type, + u8 q_id, u64 data); + int (*clear_ip_busy)(struct dpmaif_drv_info *drv_info); + int (*send_doorbell)(struct dpmaif_drv_info *drv_info, enum dpmaif_drv_ring_type type, + u8 q_id, u32 cnt); + int (*get_ring_idx)(struct dpmaif_drv_info *drv_info, enum dpmaif_drv_ring_idx index, + u8 q_id); + int (*feature_cmd)(struct dpmaif_drv_info *drv_info, enum dpmaif_drv_cmd cmd, void *data); + void (*dump)(struct dpmaif_drv_info *drv_info); +}; + +static inline int mtk_dpmaif_drv_init(struct dpmaif_drv_info *drv_info, void *data) +{ + return drv_info->drv_ops->init(drv_info, data); +} + +static inline int mtk_dpmaif_drv_start_queue(struct dpmaif_drv_info *drv_info, + enum dpmaif_drv_dir dir) +{ + return drv_info->drv_ops->start_queue(drv_info, dir); +} + +static inline int mtk_dpmaif_drv_stop_queue(struct dpmaif_drv_info *drv_info, + enum dpmaif_drv_dir dir) +{ + return drv_info->drv_ops->stop_queue(drv_info, dir); +} + +static inline int mtk_dpmaif_drv_intr_handle(struct dpmaif_drv_info *drv_info, + void *data, u8 irq_id) +{ + return drv_info->drv_ops->intr_handle(drv_info, data, irq_id); +} + +static inline int mtk_dpmaif_drv_intr_complete(struct dpmaif_drv_info *drv_info, + enum dpmaif_drv_intr_type type, u8 q_id, u64 data) +{ + return drv_info->drv_ops->intr_complete(drv_info, type, q_id, data); +} + +static inline int mtk_dpmaif_drv_clear_ip_busy(struct dpmaif_drv_info *drv_info) +{ + return drv_info->drv_ops->clear_ip_busy(drv_info); +} + +static inline int mtk_dpmaif_drv_send_doorbell(struct dpmaif_drv_info *drv_info, + enum dpmaif_drv_ring_type type, u8 q_id, u32 cnt) +{ + return drv_info->drv_ops->send_doorbell(drv_info, type, q_id, cnt); +} + +static inline int mtk_dpmaif_drv_get_ring_idx(struct dpmaif_drv_info *drv_info, + enum dpmaif_drv_ring_idx index, u8 q_id) +{ + return drv_info->drv_ops->get_ring_idx(drv_info, index, q_id); +} + +static inline int mtk_dpmaif_drv_feature_cmd(struct dpmaif_drv_info *drv_info, + enum dpmaif_drv_cmd cmd, void *data) +{ + return drv_info->drv_ops->feature_cmd(drv_info, cmd, data); +} + +extern struct dpmaif_drv_ops dpmaif_drv_ops_t800; + +#endif diff --git a/drivers/net/wwan/mediatek/pcie/mtk_dpmaif_drv_t800.c b/drivers/net/wwan/mediatek/pcie/mtk_dpmaif_drv_t800.c new file mode 100644 index 
000000000000..c9a1cb431cbe --- /dev/null +++ b/drivers/net/wwan/mediatek/pcie/mtk_dpmaif_drv_t800.c @@ -0,0 +1,2115 @@ +// SPDX-License-Identifier: BSD-3-Clause-Clear +/* + * Copyright (c) 2022, MediaTek Inc. + */ + +#include +#include + +#include "mtk_dev.h" +#include "mtk_dpmaif_drv.h" +#include "mtk_dpmaif_reg_t800.h" + +#define DRV_TO_MDEV(__drv_info) ((__drv_info)->mdev) + +/* 2ms -> 2 * 1000 / 10 = 200 */ +#define POLL_MAX_TIMES 200 +#define POLL_INTERVAL_US 10 + +static void mtk_dpmaif_drv_reset(struct dpmaif_drv_info *drv_info) +{ + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_AP_AO_RGU_ASSERT, DPMAIF_AP_AO_RST_BIT); + /* Delay 2 us to wait for hardware ready. */ + udelay(2); + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_AP_RGU_ASSERT, DPMAIF_AP_RST_BIT); + udelay(2); + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_AP_AO_RGU_DEASSERT, DPMAIF_AP_AO_RST_BIT); + udelay(2); + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_AP_RGU_DEASSERT, DPMAIF_AP_RST_BIT); + udelay(2); +} + +static bool mtk_dpmaif_drv_sram_init(struct dpmaif_drv_info *drv_info) +{ + u32 val, cnt = 0; + bool ret = true; + + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AP_MISC_RSTR_CLR); + val |= DPMAIF_MEM_CLR_MASK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AP_MISC_RSTR_CLR, val); + + do { + if (!(mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AP_MISC_RSTR_CLR) & + DPMAIF_MEM_CLR_MASK)) + break; + + cnt++; + udelay(POLL_INTERVAL_US); + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to initialize sram.\n"); + return false; + } + return ret; +} + +static bool mtk_dpmaif_drv_config(struct dpmaif_drv_info *drv_info) +{ + u32 val; + + /* Reset dpmaif HW setting. */ + mtk_dpmaif_drv_reset(drv_info); + + /* Initialize dpmaif sram. */ + if (!mtk_dpmaif_drv_sram_init(drv_info)) + return false; + + /* Set DPMAIF AP port mode. */ + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_RDY_CHK_THRES); + val &= ~DPMAIF_PORT_MODE_MSK; + val |= DPMAIF_PORT_MODE_PCIE; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_RDY_CHK_THRES, val); + + /* Set CG enable. */ + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AP_MISC_CG_EN, 0x7f); + return true; +} + +static bool mtk_dpmaif_drv_init_intr(struct dpmaif_drv_info *drv_info) +{ + struct dpmaif_drv_irq_en_mask *irq_en_mask; + u32 cnt = 0, cfg; + + irq_en_mask = &drv_info->drv_irq_en_mask; + + /* Set SW UL interrupt. */ + irq_en_mask->ap_ul_l2intr_en_mask = DPMAIF_AP_UL_L2INTR_EN_MASK; + + /* Clear dummy status. */ + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_UL_L2TISAR0, 0xFFFFFFFF); + + /* Set HW UL interrupt enable mask. */ + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_UL_L2TICR0, + irq_en_mask->ap_ul_l2intr_en_mask); + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_UL_L2TISR0, + ~(irq_en_mask->ap_ul_l2intr_en_mask)); + mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_UL_L2TISR0); + + /* Check UL interrupt mask set done. */ + do { + if (!((mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_UL_L2TIMR0) & + irq_en_mask->ap_ul_l2intr_en_mask) == irq_en_mask->ap_ul_l2intr_en_mask)) + break; + + cnt++; + udelay(POLL_INTERVAL_US); + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to set UL interrupt mask.\n"); + return false; + } + + /* Set SW DL interrupt. */ + irq_en_mask->ap_dl_l2intr_en_mask = DPMAIF_AP_DL_L2INTR_EN_MASK; + + /* Clear dummy status. 
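+ * (Writing all ones to DPMAIF_PD_AP_DL_L2TISAR0 below clears any stale DL interrupt status before the DL interrupt enable mask is programmed.)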
*/ + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TISAR0, 0xFFFFFFFF); + + /* Set HW DL interrupt enable mask. */ + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TISR0, + ~(irq_en_mask->ap_dl_l2intr_en_mask)); + mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TISR0); + + /* Check DL interrupt mask set done. */ + cnt = 0; + do { + if (!((mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TIMR0) & + irq_en_mask->ap_dl_l2intr_en_mask) == irq_en_mask->ap_dl_l2intr_en_mask)) + break; + + cnt++; + udelay(POLL_INTERVAL_US); + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to set DL interrupt mask\n"); + return false; + } + + /* Set SW AP IP busy. */ + irq_en_mask->ap_udl_ip_busy_en_mask = DPMAIF_AP_UDL_IP_BUSY_EN_MASK; + + /* Clear dummy status. */ + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_IP_BUSY, 0xFFFFFFFF); + + /* Set HW IP busy mask. */ + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DLUL_IP_BUSY_MASK, + irq_en_mask->ap_udl_ip_busy_en_mask); + + /* DLQ HPC setting. */ + cfg = mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_UL_AP_L1TIMR0); + cfg |= DPMAIF_DL_INT_Q2APTOP_MSK | DPMAIF_DL_INT_Q2TOQ1_MSK | DPMAIF_UL_TOP0_INT_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_UL_AP_L1TIMR0, cfg); + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_HPC_INTR_MASK, 0xffff); + + dev_info(DRV_TO_MDEV(drv_info)->dev, + "ul_mask=0x%08x, dl_mask=0x%08x, busy_mask=0x%08x\n", + irq_en_mask->ap_ul_l2intr_en_mask, + irq_en_mask->ap_dl_l2intr_en_mask, + irq_en_mask->ap_udl_ip_busy_en_mask); + return true; +} + +static void mtk_dpmaif_drv_set_property(struct dpmaif_drv_info *drv_info, + struct dpmaif_drv_cfg *drv_cfg) +{ + struct dpmaif_drv_property *drv_property = &drv_info->drv_property; + struct dpmaif_drv_data_ring *ring; + struct dpmaif_drv_dlq *dlq; + struct dpmaif_drv_ulq *ulq; + u32 i; + + drv_property->features = drv_cfg->features; + + for (i = 0; i < DPMAIF_DLQ_NUM; i++) { + dlq = &drv_property->dlq[i]; + dlq->pit_base = drv_cfg->pit_base[i]; + dlq->pit_size = drv_cfg->pit_cnt[i]; + dlq->q_started = true; + } + + for (i = 0; i < DPMAIF_ULQ_NUM; i++) { + ulq = &drv_property->ulq[i]; + ulq->drb_base = drv_cfg->drb_base[i]; + ulq->drb_size = drv_cfg->drb_cnt[i]; + ulq->q_started = true; + } + + ring = &drv_property->ring; + + /* Normal bat setting. */ + ring->normal_bat_base = drv_cfg->normal_bat_base; + ring->normal_bat_size = drv_cfg->normal_bat_cnt; + ring->normal_bat_pkt_bufsz = drv_cfg->normal_bat_buf_size; + ring->normal_bat_remain_size = DPMAIF_HW_BAT_REMAIN; + ring->normal_bat_rsv_length = DPMAIF_HW_BAT_RSVLEN; + ring->chk_normal_bat_num = DPMAIF_HW_CHK_BAT_NUM; + + /* Frag bat setting. */ + if (drv_property->features & DATA_HW_F_FRAG) { + ring->frag_bat_base = drv_cfg->frag_bat_base; + ring->frag_bat_size = drv_cfg->frag_bat_cnt; + ring->frag_bat_pkt_bufsz = drv_cfg->frag_bat_buf_size; + ring->chk_frag_bat_num = DPMAIF_HW_CHK_FRG_NUM; + } + + ring->mtu = drv_cfg->max_mtu; + ring->pkt_bid_max_cnt = DPMAIF_HW_PKT_BIDCNT; + ring->pkt_alignment = DPMAIF_HW_PKT_ALIGN; + ring->chk_pit_num = DPMAIF_HW_CHK_PIT_NUM; +} + +static void mtk_dpmaif_drv_init_common_hw(struct dpmaif_drv_info *drv_info) +{ + u32 val; + + /* Config PCIe mode. */ + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_UL_RESERVE_AO_RW, + DPMAIF_PCIE_MODE_SET_VALUE); + + /* Bat cache enable. 
*/ + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_INIT_CON1); + val |= DPMAIF_DL_BAT_CACHE_PRI; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_INIT_CON1, val); + + /* Pit burst enable. */ + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_RDY_CHK_THRES); + val |= DPMAIF_DL_BURST_PIT_EN; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_RDY_CHK_THRES, val); +} + +static void mtk_dpmaif_drv_set_hpc_cntl(struct dpmaif_drv_info *drv_info) +{ + u32 cfg = 0; + + cfg = (DPMAIF_HPC_LRO_PATH_DF & 0x3) << 0; + cfg |= (DPMAIF_HPC_ADD_MODE_DF & 0x3) << 2; + cfg |= (DPMAIF_HASH_PRIME_DF & 0xf) << 4; + cfg |= (DPMAIF_HPC_TOTAL_NUM & 0xff) << 8; + + /* Configuration include hpc dlq path, + * hpc add mode, hash prime, hpc total number. + */ + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_DL_HPC_CNTL, cfg); +} + +static void mtk_dpmaif_drv_set_agg_cfg(struct dpmaif_drv_info *drv_info) +{ + u32 cfg; + + cfg = (DPMAIF_AGG_MAX_LEN_DF & 0xffff) << 0; + cfg |= (DPMAIF_AGG_TBL_ENT_NUM_DF & 0xffff) << 16; + + /* Configuration include agg max length, agg table number. */ + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_DL_LRO_AGG_CFG, cfg); + + /* enable/disable AGG */ + cfg = mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_DL_RDY_CHK_FRG_THRES); + if (drv_info->drv_property.features & DATA_HW_F_LRO) + mtk_hw_write32(DRV_TO_MDEV(drv_info), + NRL2_DPMAIF_AO_DL_RDY_CHK_FRG_THRES, cfg | (0xff << 20)); + else + mtk_hw_write32(DRV_TO_MDEV(drv_info), + NRL2_DPMAIF_AO_DL_RDY_CHK_FRG_THRES, cfg & 0xf00fffff); +} + +static void mtk_dpmaif_drv_set_hash_bit_choose(struct dpmaif_drv_info *drv_info) +{ + u32 cfg; + + cfg = (DPMAIF_LRO_HASH_BIT_CHOOSE_DF & 0x7) << 0; + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_DL_LROPIT_INIT_CON5, cfg); +} + +static void mtk_dpmaif_drv_set_mid_pit_timeout_threshold(struct dpmaif_drv_info *drv_info) +{ + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_DL_LROPIT_TIMEOUT0, + DPMAIF_MID_TIMEOUT_THRES_DF); +} + +static void mtk_dpmaif_drv_set_dlq_timeout_threshold(struct dpmaif_drv_info *drv_info) +{ + u32 val, i; + + for (i = 0; i < DPMAIF_HPC_MAX_TOTAL_NUM; i++) { + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), + NRL2_DPMAIF_AO_DL_LROPIT_TIMEOUT1 + 4 * (i / 2)); + + if (i % 2) + val = (val & 0xFFFF) | (DPMAIF_LRO_TIMEOUT_THRES_DF << 16); + else + val = (val & 0xFFFF0000) | (DPMAIF_LRO_TIMEOUT_THRES_DF); + + mtk_hw_write32(DRV_TO_MDEV(drv_info), + NRL2_DPMAIF_AO_DL_LROPIT_TIMEOUT1 + (4 * (i / 2)), val); + } +} + +static void mtk_dpmaif_drv_set_dlq_start_prs_threshold(struct dpmaif_drv_info *drv_info) +{ + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_DL_LROPIT_TRIG_THRES, + DPMAIF_LRO_PRS_THRES_DF & 0x3FFFF); +} + +static void mtk_dpmaif_drv_toeplitz_hash_enable(struct dpmaif_drv_info *drv_info, u32 enable) +{ + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_REG_TOE_HASH_EN, enable); +} + +static void mtk_dpmaif_drv_hash_default_value_set(struct dpmaif_drv_info *drv_info, u32 hash) +{ + u32 val = mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_REG_HASH_CFG_CON); + + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_REG_HASH_CFG_CON, + (val & DPMAIF_HASH_DEFAULT_V_MASK) | hash); +} + +static int mtk_dpmaif_drv_hash_sec_key_set(struct dpmaif_drv_info *drv_info, u8 *hash_key) +{ + u32 i, cnt = 0; + u32 index; + u32 val; + + for (i = 0; i < DPMAIF_HASH_SEC_KEY_NUM / 4; i++) { + index = i << 2; + val = hash_key[index] << 24 | hash_key[index + 1] << 16 | + hash_key[index + 2] << 8 | hash_key[index + 3]; + mtk_hw_write32(DRV_TO_MDEV(drv_info), 
NRL2_REG_HASH_SEC_KEY_0 + index, val); + } + + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_REG_HASH_SEC_KEY_UPD, 1); + + do { + if (!(mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_REG_HASH_SEC_KEY_UPD))) + break; + + cnt++; + udelay(POLL_INTERVAL_US); + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) + return -DATA_HW_REG_TIMEOUT; + + return 0; +} + +static int mtk_dpmaif_drv_hash_sec_key_get(struct dpmaif_drv_info *drv_info, u8 *hash_key) +{ + u32 index; + u32 val; + u32 i; + + for (i = 0; i < DPMAIF_HASH_SEC_KEY_NUM / 4; i++) { + index = i << 2; + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_REG_HASH_SEC_KEY_0 + index); + hash_key[index] = val >> 24 & 0xff; + hash_key[index + 1] = val >> 16 & 0xff; + hash_key[index + 2] = val >> 8 & 0xff; + hash_key[index + 3] = val & 0xff; + } + + return 0; +} + +static void mtk_dpmaif_drv_hash_bit_mask_set(struct dpmaif_drv_info *drv_info, u32 mask) +{ + u32 val = mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_REG_HASH_CFG_CON); + + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_REG_HASH_CFG_CON, + (val & DPMAIF_HASH_BIT_MASK) | (mask << 8)); +} + +static void mtk_dpmaif_drv_hash_indir_mask_set(struct dpmaif_drv_info *drv_info, u32 mask) +{ + u32 val = mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_REG_HASH_CFG_CON); + + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_REG_HASH_CFG_CON, + (val & DPMAIF_HASH_INDR_MASK) | (mask << 16)); +} + +static u32 mtk_dpmaif_drv_hash_indir_mask_get(struct dpmaif_drv_info *drv_info) +{ + u32 val = mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_REG_HASH_CFG_CON); + + return (val & (~DPMAIF_HASH_INDR_MASK)) >> 16; +} + +static void mtk_dpmaif_drv_hpc_stats_thres_set(struct dpmaif_drv_info *drv_info, u32 thres) +{ + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_REG_HPC_STATS_THRES, thres); +} + +static void mtk_dpmaif_drv_hpc_stats_time_cfg_set(struct dpmaif_drv_info *drv_info, u32 time_cfg) +{ + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_REG_HPC_STATS_TIMER_CFG, time_cfg); +} + +static void mtk_dpmaif_drv_init_dl_hpc_hw(struct dpmaif_drv_info *drv_info) +{ + u8 hash_key[DPMAIF_HASH_SEC_KEY_NUM]; + + mtk_dpmaif_drv_set_hpc_cntl(drv_info); + mtk_dpmaif_drv_set_agg_cfg(drv_info); + mtk_dpmaif_drv_set_hash_bit_choose(drv_info); + mtk_dpmaif_drv_set_mid_pit_timeout_threshold(drv_info); + mtk_dpmaif_drv_set_dlq_timeout_threshold(drv_info); + mtk_dpmaif_drv_set_dlq_start_prs_threshold(drv_info); + mtk_dpmaif_drv_toeplitz_hash_enable(drv_info, DPMAIF_TOEPLITZ_HASH_EN); + mtk_dpmaif_drv_hash_default_value_set(drv_info, DPMAIF_HASH_DEFAULT_VALUE); + get_random_bytes(hash_key, sizeof(hash_key)); + mtk_dpmaif_drv_hash_sec_key_set(drv_info, hash_key); + mtk_dpmaif_drv_hash_bit_mask_set(drv_info, DPMAIF_HASH_BIT_MASK_DF); + mtk_dpmaif_drv_hash_indir_mask_set(drv_info, DPMAIF_HASH_INDR_MASK_DF); + mtk_dpmaif_drv_hpc_stats_thres_set(drv_info, DPMAIF_HPC_STATS_THRESHOLD); + mtk_dpmaif_drv_hpc_stats_time_cfg_set(drv_info, DPMAIF_HPC_STATS_TIMER_CFG); +} + +static void mtk_dpmaif_drv_dl_set_ao_remain_minsz(struct dpmaif_drv_info *drv_info, u32 sz) +{ + u32 val; + + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_PKTINFO_CONO); + val &= ~DPMAIF_BAT_REMAIN_MINSZ_MSK; + val |= ((sz / DPMAIF_BAT_REMAIN_SZ_BASE) << 8) & DPMAIF_BAT_REMAIN_MINSZ_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_PKTINFO_CONO, val); +} + +static void mtk_dpmaif_drv_dl_set_ao_bat_bufsz(struct dpmaif_drv_info *drv_info, u32 buf_sz) +{ + u32 val; + + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_PKTINFO_CON2); + val &= ~DPMAIF_BAT_BUF_SZ_MSK; + val |= 
((buf_sz / DPMAIF_BAT_BUFFER_SZ_BASE) << 8) & DPMAIF_BAT_BUF_SZ_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_PKTINFO_CON2, val); +} + +static void mtk_dpmaif_drv_dl_set_ao_bat_rsv_length(struct dpmaif_drv_info *drv_info, u32 length) +{ + u32 val; + + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_PKTINFO_CON2); + val &= ~DPMAIF_BAT_RSV_LEN_MSK; + val |= length & DPMAIF_BAT_RSV_LEN_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_PKTINFO_CON2, val); +} + +static void mtk_dpmaif_drv_dl_set_ao_bid_maxcnt(struct dpmaif_drv_info *drv_info, u32 cnt) +{ + u32 val; + + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_PKTINFO_CONO); + val &= ~DPMAIF_BAT_BID_MAXCNT_MSK; + val |= (cnt << 16) & DPMAIF_BAT_BID_MAXCNT_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_PKTINFO_CONO, val); +} + +static void mtk_dpmaif_drv_dl_set_pkt_alignment(struct dpmaif_drv_info *drv_info, + bool enable, u32 mode) +{ + u32 val; + + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_RDY_CHK_THRES); + val &= ~DPMAIF_PKT_ALIGN_MSK; + if (enable) { + val |= DPMAIF_PKT_ALIGN_EN; + val |= (mode << 22) & DPMAIF_PKT_ALIGN_MSK; + } + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_RDY_CHK_THRES, val); +} + +static void mtk_dpmaif_drv_dl_set_pit_seqnum(struct dpmaif_drv_info *drv_info, u32 seq) +{ + u32 val; + + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_DL_PIT_SEQ_END); + val &= ~DPMAIF_DL_PIT_SEQ_MSK; + val |= seq & DPMAIF_DL_PIT_SEQ_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_DL_PIT_SEQ_END, val); +} + +static void mtk_dpmaif_drv_dl_set_ao_mtu(struct dpmaif_drv_info *drv_info, u32 mtu_sz) +{ + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_PKTINFO_CON1, mtu_sz); +} + +static void mtk_dpmaif_drv_dl_set_ao_pit_chknum(struct dpmaif_drv_info *drv_info, u32 number) +{ + u32 val; + + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_PKTINFO_CON2); + val &= ~DPMAIF_PIT_CHK_NUM_MSK; + val |= (number << 24) & DPMAIF_PIT_CHK_NUM_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_PKTINFO_CON2, val); +} + +static void mtk_dpmaif_drv_dl_set_ao_bat_check_threshold(struct dpmaif_drv_info *drv_info, u32 size) +{ + u32 val; + + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_RDY_CHK_THRES); + val &= ~DPMAIF_BAT_CHECK_THRES_MSK; + val |= (size << 16) & DPMAIF_BAT_CHECK_THRES_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_RDY_CHK_THRES, val); +} + +static void mtk_dpmaif_drv_dl_frg_ao_en(struct dpmaif_drv_info *drv_info, bool enable) +{ + u32 val; + + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_FRG_CHK_THRES); + if (enable) + val |= DPMAIF_FRG_EN_MSK; + else + val &= ~DPMAIF_FRG_EN_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_FRG_CHK_THRES, val); +} + +static void mtk_dpmaif_drv_dl_set_ao_frg_bufsz(struct dpmaif_drv_info *drv_info, u32 buf_sz) +{ + u32 val; + + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_FRG_CHK_THRES); + val &= ~DPMAIF_FRG_BUF_SZ_MSK; + val |= ((buf_sz / DPMAIF_FRG_BUFFER_SZ_BASE) << 8) & DPMAIF_FRG_BUF_SZ_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_FRG_CHK_THRES, val); +} + +static void mtk_dpmaif_drv_dl_set_ao_frg_check_threshold(struct dpmaif_drv_info *drv_info, u32 size) +{ + u32 val; + + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_FRG_CHK_THRES); + val &= ~DPMAIF_FRG_CHECK_THRES_MSK; + val |= size & DPMAIF_FRG_CHECK_THRES_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_FRG_CHK_THRES, val); +} + +static void 
mtk_dpmaif_drv_dl_set_bat_base_addr(struct dpmaif_drv_info *drv_info, u64 addr) +{ + u32 lb_addr = (u32)(addr & 0xFFFFFFFF); + u32 hb_addr = (u32)(addr >> 32); + + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_INIT_CON0, lb_addr); + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_INIT_CON3, hb_addr); +} + +static void mtk_dpmaif_drv_dl_set_bat_size(struct dpmaif_drv_info *drv_info, u32 size) +{ + u32 val; + + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_INIT_CON1); + val &= ~DPMAIF_BAT_SIZE_MSK; + val |= size & DPMAIF_BAT_SIZE_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_INIT_CON1, val); +} + +static void mtk_dpmaif_drv_dl_bat_en(struct dpmaif_drv_info *drv_info, bool enable) +{ + u32 val; + + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_INIT_CON1); + if (enable) + val |= DPMAIF_BAT_EN_MSK; + else + val &= ~DPMAIF_BAT_EN_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_INIT_CON1, val); +} + +static void mtk_dpmaif_drv_dl_bat_init_done(struct dpmaif_drv_info *drv_info, bool frag_en) +{ + u32 cnt = 0, dl_bat_init; + + dl_bat_init = DPMAIF_DL_BAT_INIT_ALLSET; + dl_bat_init |= DPMAIF_DL_BAT_INIT_EN; + + if (frag_en) + dl_bat_init |= DPMAIF_DL_BAT_FRG_INIT; + + do { + if (!(mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_INIT) & + DPMAIF_DL_BAT_INIT_NOT_READY)) { + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_INIT, dl_bat_init); + break; + } + cnt++; + udelay(POLL_INTERVAL_US); + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to initialize bat.\n"); + return; + } + + cnt = 0; + do { + if (!((mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_INIT) & + DPMAIF_DL_BAT_INIT_NOT_READY) == DPMAIF_DL_BAT_INIT_NOT_READY)) + break; + + cnt++; + udelay(POLL_INTERVAL_US); + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Initialize bat is not ready.\n"); + return; + } +} + +static void mtk_dpmaif_drv_dl_set_pit_base_addr(struct dpmaif_drv_info *drv_info, u64 addr) +{ + u32 lb_addr = (u32)(addr & 0xFFFFFFFF); + u32 hb_addr = (u32)(addr >> 32); + + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DL_LROPIT_INIT_CON0, lb_addr); + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DL_LROPIT_INIT_CON4, hb_addr); +} + +static void mtk_dpmaif_drv_dl_set_pit_size(struct dpmaif_drv_info *drv_info, u32 size) +{ + u32 val; + + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DL_LROPIT_INIT_CON1); + val &= ~DPMAIF_PIT_SIZE_MSK; + val |= size & DPMAIF_PIT_SIZE_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DL_LROPIT_INIT_CON1, val); + + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DL_LROPIT_INIT_CON2, 0); + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DL_LROPIT_INIT_CON3, 0); + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DL_LROPIT_INIT_CON5, 0); + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DL_LROPIT_INIT_CON6, 0); +} + +static void mtk_dpmaif_drv_dl_pit_en(struct dpmaif_drv_info *drv_info, bool enable) +{ + u32 val; + + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DL_LROPIT_INIT_CON3); + if (enable) + val |= DPMAIF_LROPIT_EN_MSK; + else + val &= ~DPMAIF_LROPIT_EN_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DL_LROPIT_INIT_CON3, val); +} + +static void mtk_dpmaif_drv_dl_pit_init_done(struct dpmaif_drv_info *drv_info, u32 pit_idx) +{ + int cnt = 0, dl_pit_init; + + dl_pit_init = DPMAIF_DL_PIT_INIT_ALLSET; + dl_pit_init |= pit_idx << 
DPMAIF_LROPIT_CHAN_OFS; + dl_pit_init |= DPMAIF_DL_PIT_INIT_EN; + + do { + if (!(mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DL_LROPIT_INIT) & + DPMAIF_DL_PIT_INIT_NOT_READY)) { + mtk_hw_write32(DRV_TO_MDEV(drv_info), + NRL2_DPMAIF_DL_LROPIT_INIT, dl_pit_init); + break; + } + udelay(POLL_INTERVAL_US); + cnt++; + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to initialize pit.\n"); + return; + } + + cnt = 0; + do { + if (!((mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DL_LROPIT_INIT) & + DPMAIF_DL_PIT_INIT_NOT_READY) == DPMAIF_DL_PIT_INIT_NOT_READY)) + break; + + udelay(POLL_INTERVAL_US); + cnt++; + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Initialize pit is not ready.\n"); + return; + } +} + +static void mtk_dpmaif_drv_config_dlq_pit_hw(struct dpmaif_drv_info *drv_info, u8 q_num, + struct dpmaif_drv_dlq *dlq) +{ + mtk_dpmaif_drv_dl_set_pit_base_addr(drv_info, (u64)dlq->pit_base); + mtk_dpmaif_drv_dl_set_pit_size(drv_info, dlq->pit_size); + mtk_dpmaif_drv_dl_pit_en(drv_info, true); + mtk_dpmaif_drv_dl_pit_init_done(drv_info, q_num); +} + +static int mtk_dpmaif_drv_dlq_all_en(struct dpmaif_drv_info *drv_info, bool enable) +{ + u32 val, dl_bat_init, cnt = 0; + + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_INIT_CON1); + + if (enable) + val |= DPMAIF_BAT_EN_MSK; + else + val &= ~DPMAIF_BAT_EN_MSK; + + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_INIT_CON1, val); + mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_INIT_CON1); + + dl_bat_init = DPMAIF_DL_BAT_INIT_ONLY_ENABLE_BIT; + dl_bat_init |= DPMAIF_DL_BAT_INIT_EN; + + /* Update DL bat setting to HW */ + do { + if (!(mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_INIT) & + DPMAIF_DL_BAT_INIT_NOT_READY)) { + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_INIT, dl_bat_init); + break; + } + udelay(POLL_INTERVAL_US); + cnt++; + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to enable all dl queue.\n"); + return -DATA_HW_REG_TIMEOUT; + } + + /* Wait HW update done */ + cnt = 0; + do { + if (!((mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_INIT) & + DPMAIF_DL_BAT_INIT_NOT_READY) == DPMAIF_DL_BAT_INIT_NOT_READY)) + break; + + udelay(POLL_INTERVAL_US); + cnt++; + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Enable all dl queue is not ready.\n"); + return -DATA_HW_REG_TIMEOUT; + } + + return 0; +} + +static bool mtk_dpmaif_drv_dl_idle_check(struct dpmaif_drv_info *drv_info) +{ + bool is_idle = false; + u32 dl_dbg_sta; + + dl_dbg_sta = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_DBG_STA1); + + /* If all the queues are idle, DL idle is true. 
*/ + if ((dl_dbg_sta & DPMAIF_DL_IDLE_STS) == DPMAIF_DL_IDLE_STS) + is_idle = true; + + return is_idle; +} + +static u32 mtk_dpmaif_drv_dl_get_wridx(struct dpmaif_drv_info *drv_info) +{ + return ((mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_PIT_STA3)) & + DPMAIF_DL_PIT_WRIDX_MSK); +} + +static u32 mtk_dpmaif_drv_dl_get_pit_ridx(struct dpmaif_drv_info *drv_info) +{ + return ((mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_PIT_STA2)) & + DPMAIF_DL_PIT_WRIDX_MSK); +} + +static void mtk_dpmaif_drv_dl_set_pkt_checksum(struct dpmaif_drv_info *drv_info) +{ + u32 val; + + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_RDY_CHK_THRES); + val |= DPMAIF_DL_PKT_CHECKSUM_EN; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_RDY_CHK_THRES, val); +} + +static bool mtk_dpmaif_drv_config_dlq_hw(struct dpmaif_drv_info *drv_info) +{ + struct dpmaif_drv_property *drv_property = &drv_info->drv_property; + struct dpmaif_drv_data_ring *ring = &drv_property->ring; + struct dpmaif_drv_dlq *dlq; + u32 i; + + mtk_dpmaif_drv_init_dl_hpc_hw(drv_info); + mtk_dpmaif_drv_dl_set_ao_remain_minsz(drv_info, ring->normal_bat_remain_size); + mtk_dpmaif_drv_dl_set_ao_bat_bufsz(drv_info, ring->normal_bat_pkt_bufsz); + mtk_dpmaif_drv_dl_set_ao_bat_rsv_length(drv_info, ring->normal_bat_rsv_length); + mtk_dpmaif_drv_dl_set_ao_bid_maxcnt(drv_info, ring->pkt_bid_max_cnt); + + if (ring->pkt_alignment == 64) + mtk_dpmaif_drv_dl_set_pkt_alignment(drv_info, true, DPMAIF_PKT_ALIGN64_MODE); + else if (ring->pkt_alignment == 128) + mtk_dpmaif_drv_dl_set_pkt_alignment(drv_info, true, DPMAIF_PKT_ALIGN128_MODE); + else + mtk_dpmaif_drv_dl_set_pkt_alignment(drv_info, false, 0); + + mtk_dpmaif_drv_dl_set_pit_seqnum(drv_info, DPMAIF_PIT_SEQ_MAX); + mtk_dpmaif_drv_dl_set_ao_mtu(drv_info, ring->mtu); + mtk_dpmaif_drv_dl_set_ao_pit_chknum(drv_info, ring->chk_pit_num); + mtk_dpmaif_drv_dl_set_ao_bat_check_threshold(drv_info, ring->chk_normal_bat_num); + + /* Initialize frag bat. */ + if (drv_property->features & DATA_HW_F_FRAG) { + mtk_dpmaif_drv_dl_frg_ao_en(drv_info, true); + mtk_dpmaif_drv_dl_set_ao_frg_bufsz(drv_info, ring->frag_bat_pkt_bufsz); + mtk_dpmaif_drv_dl_set_ao_frg_check_threshold(drv_info, ring->chk_frag_bat_num); + mtk_dpmaif_drv_dl_set_bat_base_addr(drv_info, (u64)ring->frag_bat_base); + mtk_dpmaif_drv_dl_set_bat_size(drv_info, ring->frag_bat_size); + mtk_dpmaif_drv_dl_bat_en(drv_info, true); + mtk_dpmaif_drv_dl_bat_init_done(drv_info, true); + } + + /* Initialize normal bat. */ + mtk_dpmaif_drv_dl_set_bat_base_addr(drv_info, (u64)ring->normal_bat_base); + mtk_dpmaif_drv_dl_set_bat_size(drv_info, ring->normal_bat_size); + mtk_dpmaif_drv_dl_bat_en(drv_info, false); + mtk_dpmaif_drv_dl_bat_init_done(drv_info, false); + + /* Initialize pit information. 
*/ + for (i = 0; i < DPMAIF_DLQ_NUM; i++) { + dlq = &drv_property->dlq[i]; + mtk_dpmaif_drv_config_dlq_pit_hw(drv_info, i, dlq); + } + + if (mtk_dpmaif_drv_dlq_all_en(drv_info, true)) + return false; + mtk_dpmaif_drv_dl_set_pkt_checksum(drv_info); + return true; +} + +static void mtk_dpmaif_drv_ul_update_drb_size(struct dpmaif_drv_info *drv_info, u8 q_num, u32 size) +{ + u32 old_size; + u64 addr; + + addr = DPMAIF_UL_DRBSIZE_ADDRH_N(q_num); + + old_size = mtk_hw_read32(DRV_TO_MDEV(drv_info), addr); + old_size &= ~DPMAIF_DRB_SIZE_MSK; + old_size |= size & DPMAIF_DRB_SIZE_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), addr, old_size); +} + +static void mtk_dpmaif_drv_ul_update_drb_base_addr(struct dpmaif_drv_info *drv_info, + u8 q_num, u64 addr) +{ + u32 lb_addr = (u32)(addr & 0xFFFFFFFF); + u32 hb_addr = (u32)(addr >> 32); + + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_ULQSAR_N(q_num), lb_addr); + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_UL_DRB_ADDRH_N(q_num), hb_addr); +} + +static void mtk_dpmaif_drv_ul_rdy_en(struct dpmaif_drv_info *drv_info, u8 q_num, bool ready) +{ + u32 ul_rdy_en; + + ul_rdy_en = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_UL_CHNL_ARB0); + if (ready) + ul_rdy_en |= (1 << q_num); + else + ul_rdy_en &= ~(1 << q_num); + + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_UL_CHNL_ARB0, ul_rdy_en); +} + +static void mtk_dpmaif_drv_ul_arb_en(struct dpmaif_drv_info *drv_info, u8 q_num, bool enable) +{ + u32 ul_arb_en; + + ul_arb_en = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_UL_CHNL_ARB0); + if (enable) + ul_arb_en |= (1 << (q_num + 8)); + else + ul_arb_en &= ~(1 << (q_num + 8)); + + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_UL_CHNL_ARB0, ul_arb_en); +} + +static void mtk_dpmaif_drv_config_ulq_hw(struct dpmaif_drv_info *drv_info) +{ + struct dpmaif_drv_ulq *ulq; + u32 i; + + for (i = 0; i < DPMAIF_ULQ_NUM; i++) { + ulq = &drv_info->drv_property.ulq[i]; + mtk_dpmaif_drv_ul_update_drb_size(drv_info, i, + (ulq->drb_size * DPMAIF_UL_DRB_ENTRY_WORD)); + mtk_dpmaif_drv_ul_update_drb_base_addr(drv_info, i, (u64)ulq->drb_base); + mtk_dpmaif_drv_ul_rdy_en(drv_info, i, true); + mtk_dpmaif_drv_ul_arb_en(drv_info, i, true); + } +} + +static bool mtk_dpmaif_drv_init_done(struct dpmaif_drv_info *drv_info) +{ + u32 val, cnt = 0; + + /* Sync default value to SRAM. */ + val = mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AP_MISC_OVERWRITE_CFG); + val |= DPMAIF_SRAM_SYNC_MASK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AP_MISC_OVERWRITE_CFG, val); + do { + if (!(mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AP_MISC_OVERWRITE_CFG) & + DPMAIF_SRAM_SYNC_MASK)) + break; + + udelay(POLL_INTERVAL_US); + cnt++; + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to sync default value to sram\n"); + return false; + } + + /* UL configure done. */ + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_UL_INIT_SET, DPMAIF_UL_INIT_DONE_MASK); + + /* DL configure done. 
*/ + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_DL_INIT_SET, DPMAIF_DL_INIT_DONE_MASK); + return true; +} + +static bool mtk_dpmaif_drv_cfg_hw(struct dpmaif_drv_info *drv_info) +{ + mtk_dpmaif_drv_init_common_hw(drv_info); + if (!mtk_dpmaif_drv_config_dlq_hw(drv_info)) + return false; + mtk_dpmaif_drv_config_ulq_hw(drv_info); + if (!mtk_dpmaif_drv_init_done(drv_info)) + return false; + + drv_info->ulq_all_enable = true; + drv_info->dlq_all_enable = true; + + return true; +} + +static void mtk_dpmaif_drv_clr_ul_all_intr(struct dpmaif_drv_info *drv_info) +{ + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_UL_L2TISAR0, 0xFFFFFFFF); +} + +static void mtk_dpmaif_drv_clr_dl_all_intr(struct dpmaif_drv_info *drv_info) +{ + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TISAR0, 0xFFFFFFFF); +} + +static int mtk_dpmaif_drv_init_t800(struct dpmaif_drv_info *drv_info, void *data) +{ + struct dpmaif_drv_cfg *drv_cfg = data; + + if (!drv_cfg) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Invalid parameter\n"); + return -DATA_FLOW_CHK_ERR; + } + + /* Initialize port mode and clock. */ + if (!mtk_dpmaif_drv_config(drv_info)) + return DATA_HW_REG_CHK_FAIL; + + /* Initialize dpmaif interrupt. */ + if (!mtk_dpmaif_drv_init_intr(drv_info)) + return DATA_HW_REG_CHK_FAIL; + + /* Get initialization information from trans layer. */ + mtk_dpmaif_drv_set_property(drv_info, drv_cfg); + + /* Configure HW queue setting. */ + if (!mtk_dpmaif_drv_cfg_hw(drv_info)) + return DATA_HW_REG_CHK_FAIL; + + /* Clear all interrupt status. */ + mtk_dpmaif_drv_clr_ul_all_intr(drv_info); + mtk_dpmaif_drv_clr_dl_all_intr(drv_info); + + return 0; +} + +static int mtk_dpmaif_drv_ulq_all_en(struct dpmaif_drv_info *drv_info, bool enable) +{ + u32 ul_arb_en; + + ul_arb_en = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_UL_CHNL_ARB0); + if (enable) + ul_arb_en |= DPMAIF_UL_ALL_QUE_ARB_EN; + else + ul_arb_en &= ~DPMAIF_UL_ALL_QUE_ARB_EN; + + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_UL_CHNL_ARB0, ul_arb_en); + mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_UL_CHNL_ARB0); + + return 0; +} + +static bool mtk_dpmaif_drv_ul_all_idle_check(struct dpmaif_drv_info *drv_info) +{ + bool is_idle = false; + u32 ul_dbg_sta; + + ul_dbg_sta = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_UL_DBG_STA2); + /* If all the queues are idle, UL idle is true. 
*/ + if ((ul_dbg_sta & DPMAIF_UL_IDLE_STS_MSK) == DPMAIF_UL_IDLE_STS) + is_idle = true; + + return is_idle; +} + +static int mtk_dpmaif_drv_unmask_ulq_intr(struct dpmaif_drv_info *drv_info, u32 q_num) +{ + u32 ui_que_done_mask; + + ui_que_done_mask = (1 << (q_num + DP_UL_INT_DONE_OFFSET)) & DPMAIF_UL_INT_QDONE_MSK; + drv_info->drv_irq_en_mask.ap_ul_l2intr_en_mask |= ui_que_done_mask; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_UL_L2TICR0, ui_que_done_mask); + + return 0; +} + +static int mtk_dpmaif_drv_ul_unmask_all_tx_done_intr(struct dpmaif_drv_info *drv_info) +{ + int ret; + u8 i; + + for (i = 0; i < DPMAIF_ULQ_NUM; i++) { + ret = mtk_dpmaif_drv_unmask_ulq_intr(drv_info, i); + if (ret < 0) + break; + } + + return ret; +} + +static int mtk_dpmaif_drv_dl_unmask_rx_done_intr(struct dpmaif_drv_info *drv_info, u8 qno) +{ + u32 di_que_done_mask; + + if (qno == DPMAIF_DLQ0) + di_que_done_mask = DPMAIF_DL_INT_DLQ0_QDONE_MSK; + else + di_que_done_mask = DPMAIF_DL_INT_DLQ1_QDONE_MSK; + + drv_info->drv_irq_en_mask.ap_dl_l2intr_en_mask |= di_que_done_mask; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TICR0, di_que_done_mask); + + return 0; +} + +static int mtk_dpmaif_drv_dl_unmask_all_rx_done_intr(struct dpmaif_drv_info *drv_info) +{ + int ret; + u8 i; + + for (i = 0; i < DPMAIF_DLQ_NUM; i++) { + ret = mtk_dpmaif_drv_dl_unmask_rx_done_intr(drv_info, i); + if (ret < 0) + break; + } + + return ret; +} + +static int mtk_dpmaif_drv_dlq_mask_rx_done_intr(struct dpmaif_drv_info *drv_info, u8 qno) +{ + u32 cnt = 0, di_que_done_mask; + + if (qno == DPMAIF_DLQ0) + di_que_done_mask = DPMAIF_DL_INT_DLQ0_QDONE_MSK; + else + di_que_done_mask = DPMAIF_DL_INT_DLQ1_QDONE_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TISR0, di_que_done_mask); + mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TISR0); + + /* Check mask status. 
*/ + do { + if (!((mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TIMR0) & + di_que_done_mask) != di_que_done_mask)) + break; + + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to mask dlq%u interrupt done-0x%08x\n", + qno, mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TIMR0)); + udelay(POLL_INTERVAL_US); + cnt++; + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to mask dlq0 interrupt done\n"); + return -DATA_HW_REG_TIMEOUT; + } + + drv_info->drv_irq_en_mask.ap_dl_l2intr_en_mask &= ~di_que_done_mask; + + return 0; +} + +static int mtk_dpmaif_drv_dl_mask_all_rx_done_intr(struct dpmaif_drv_info *drv_info) +{ + int ret; + u8 i; + + for (i = 0; i < DPMAIF_DLQ_NUM; i++) { + ret = mtk_dpmaif_drv_dlq_mask_rx_done_intr(drv_info, i); + if (ret < 0) + break; + } + + return ret; +} + +static void mtk_dpmaif_drv_mask_dl_batcnt_len_err_intr(struct dpmaif_drv_info *drv_info, u32 q_num) +{ + drv_info->drv_irq_en_mask.ap_dl_l2intr_en_mask &= ~DPMAIF_DL_INT_BATCNT_LEN_ERR_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TISR0, + DPMAIF_DL_INT_BATCNT_LEN_ERR_MSK); + mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TISR0); +} + +static void mtk_dpmaif_drv_unmask_dl_batcnt_len_err_intr(struct dpmaif_drv_info *drv_info) +{ + drv_info->drv_irq_en_mask.ap_dl_l2intr_en_mask |= DPMAIF_DL_INT_BATCNT_LEN_ERR_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TICR0, + DPMAIF_DL_INT_BATCNT_LEN_ERR_MSK); +} + +static int mtk_dpmaif_drv_mask_dl_frgcnt_len_err_intr(struct dpmaif_drv_info *drv_info, u32 q_num) +{ + drv_info->drv_irq_en_mask.ap_dl_l2intr_en_mask &= ~DPMAIF_DL_INT_FRG_LEN_ERR_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TISR0, + DPMAIF_DL_INT_FRG_LEN_ERR_MSK); + mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TISR0); + + return 0; +} + +static void mtk_dpmaif_drv_unmask_dl_frgcnt_len_err_intr(struct dpmaif_drv_info *drv_info) +{ + drv_info->drv_irq_en_mask.ap_dl_l2intr_en_mask |= DPMAIF_DL_INT_FRG_LEN_ERR_MSK; + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TICR0, + DPMAIF_DL_INT_FRG_LEN_ERR_MSK); +} + +static int mtk_dpmaif_drv_dlq_mask_pit_cnt_len_err_intr(struct dpmaif_drv_info *drv_info, u8 qno) +{ + if (qno == DPMAIF_DLQ0) + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_UL_APDL_L2TIMSR0, + DPMAIF_DL_INT_DLQ0_PITCNT_LEN_ERR_MSK); + else + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_UL_APDL_L2TIMSR0, + DPMAIF_DL_INT_DLQ1_PITCNT_LEN_ERR_MSK); + + mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_UL_APDL_L2TIMSR0); + + return 0; +} + +static int mtk_dpmaif_drv_dlq_unmask_pit_cnt_len_err_intr(struct dpmaif_drv_info *drv_info, u8 qno) +{ + if (qno == DPMAIF_DLQ0) + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_UL_APDL_L2TIMCR0, + DPMAIF_DL_INT_DLQ0_PITCNT_LEN_ERR_MSK); + else + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_UL_APDL_L2TIMCR0, + DPMAIF_DL_INT_DLQ1_PITCNT_LEN_ERR_MSK); + + return 0; +} + +static int mtk_dpmaif_drv_start_queue_t800(struct dpmaif_drv_info *drv_info, + enum dpmaif_drv_dir dir) +{ + int ret; + + if (dir == DPMAIF_TX) { + if (unlikely(drv_info->ulq_all_enable)) { + dev_info(DRV_TO_MDEV(drv_info)->dev, "ulq all enabled\n"); + return 0; + } + + ret = mtk_dpmaif_drv_ulq_all_en(drv_info, true); + if (ret < 0) + return ret; + + ret = mtk_dpmaif_drv_ul_unmask_all_tx_done_intr(drv_info); + if (ret < 0) + return ret; + + drv_info->ulq_all_enable = true; + } else { + if (unlikely(drv_info->dlq_all_enable)) { + 
dev_info(DRV_TO_MDEV(drv_info)->dev, "dlq all enabled\n"); + return 0; + } + + ret = mtk_dpmaif_drv_dlq_all_en(drv_info, true); + if (ret < 0) + return ret; + + ret = mtk_dpmaif_drv_dl_unmask_all_rx_done_intr(drv_info); + if (ret < 0) + return ret; + + drv_info->dlq_all_enable = true; + } + + return 0; +} + +static int mtk_dpmaif_drv_stop_ulq(struct dpmaif_drv_info *drv_info) +{ + int cnt = 0; + + /* Disable HW arb and check idle. */ + mtk_dpmaif_drv_ulq_all_en(drv_info, false); + do { + if (++cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to stop ul queue, 0x%x\n", + mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_UL_DBG_STA2)); + return -DATA_HW_REG_TIMEOUT; + } + udelay(POLL_INTERVAL_US); + } while (!mtk_dpmaif_drv_ul_all_idle_check(drv_info)); + + return 0; +} + +static int mtk_dpmaif_drv_mask_ulq_intr(struct dpmaif_drv_info *drv_info, u32 q_num) +{ + u32 cnt = 0, ui_que_done_mask; + + ui_que_done_mask = (1 << (q_num + DP_UL_INT_DONE_OFFSET)) & DPMAIF_UL_INT_QDONE_MSK; + + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_UL_L2TISR0, ui_que_done_mask); + mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_UL_L2TISR0); + + do { + if (!((mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_UL_L2TIMR0) & + ui_que_done_mask) != ui_que_done_mask)) + break; + + dev_err(DRV_TO_MDEV(drv_info)->dev, + "Failed to mask ul%u interrupt done-0x%08x\n", q_num, + mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_UL_L2TIMR0)); + udelay(POLL_INTERVAL_US); + cnt++; + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to mask dlq0 interrupt done\n"); + return -DATA_HW_REG_TIMEOUT; + } + drv_info->drv_irq_en_mask.ap_ul_l2intr_en_mask &= ~ui_que_done_mask; + + return 0; +} + +static void mtk_dpmaif_drv_ul_mask_multi_tx_done_intr(struct dpmaif_drv_info *drv_info, u8 q_mask) +{ + u32 i; + + for (i = 0; i < DPMAIF_ULQ_NUM; i++) { + if (q_mask & (1 << i)) + mtk_dpmaif_drv_mask_ulq_intr(drv_info, i); + } +} + +static int mtk_dpmaif_drv_ul_mask_all_tx_done_intr(struct dpmaif_drv_info *drv_info) +{ + int ret; + u8 i; + + for (i = 0; i < DPMAIF_ULQ_NUM; i++) { + ret = mtk_dpmaif_drv_mask_ulq_intr(drv_info, i); + if (ret < 0) + break; + } + + return ret; +} + +static int mtk_dpmaif_drv_stop_dlq(struct dpmaif_drv_info *drv_info) +{ + u32 cnt = 0, wridx, ridx; + + /* Disable HW arb and check idle. */ + mtk_dpmaif_drv_dlq_all_en(drv_info, false); + do { + if (++cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to stop dl queue, 0x%x\n", + mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_DBG_STA1)); + return -DATA_HW_REG_TIMEOUT; + } + udelay(POLL_INTERVAL_US); + } while (!mtk_dpmaif_drv_dl_idle_check(drv_info)); + + /* Check middle pit sync done. 
*/ + cnt = 0; + do { + wridx = mtk_dpmaif_drv_dl_get_wridx(drv_info); + ridx = mtk_dpmaif_drv_dl_get_pit_ridx(drv_info); + if (wridx == ridx) + break; + + udelay(POLL_INTERVAL_US); + cnt++; + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to check middle pit sync\n"); + return -DATA_HW_REG_TIMEOUT; + } + + return 0; +} + +static int mtk_dpmaif_drv_stop_queue_t800(struct dpmaif_drv_info *drv_info, enum dpmaif_drv_dir dir) +{ + int ret; + + if (dir == DPMAIF_TX) { + if (unlikely(!drv_info->ulq_all_enable)) { + dev_info(DRV_TO_MDEV(drv_info)->dev, "ulq all disabled\n"); + return 0; + } + + ret = mtk_dpmaif_drv_stop_ulq(drv_info); + if (ret < 0) + return ret; + + ret = mtk_dpmaif_drv_ul_mask_all_tx_done_intr(drv_info); + if (ret < 0) + return ret; + + drv_info->ulq_all_enable = false; + } else { + if (unlikely(!drv_info->dlq_all_enable)) { + dev_info(DRV_TO_MDEV(drv_info)->dev, "dlq all disabled\n"); + return 0; + } + + ret = mtk_dpmaif_drv_stop_dlq(drv_info); + if (ret < 0) + return ret; + + ret = mtk_dpmaif_drv_dl_mask_all_rx_done_intr(drv_info); + if (ret < 0) + return ret; + + drv_info->dlq_all_enable = false; + } + + return 0; +} + +static u32 mtk_dpmaif_drv_get_dl_lv2_sts(struct dpmaif_drv_info *drv_info) +{ + return mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TISAR0); +} + +static u32 mtk_dpmaif_drv_get_ul_lv2_sts(struct dpmaif_drv_info *drv_info) +{ + return mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_UL_L2TISAR0); +} + +static u32 mtk_dpmaif_drv_get_ul_intr_mask(struct dpmaif_drv_info *drv_info) +{ + return mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_UL_L2TIMR0); +} + +static u32 mtk_dpmaif_drv_get_dl_intr_mask(struct dpmaif_drv_info *drv_info) +{ + return mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TIMR0); +} + +static bool mtk_dpmaif_drv_check_clr_ul_done_status(struct dpmaif_drv_info *drv_info, u8 qno) +{ + u32 val, l2tisar0; + bool ret = false; + /* get TX interrupt status. */ + l2tisar0 = mtk_dpmaif_drv_get_ul_lv2_sts(drv_info); + val = l2tisar0 & DPMAIF_UL_INT_QDONE & (1 << (DP_UL_INT_DONE_OFFSET + qno)); + + /* ulq status. 
*/ + if (val) { + /* clear ulq done status */ + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_UL_L2TISAR0, val); + ret = true; + } + + return ret; +} + +static u32 mtk_dpmaif_drv_irq_src0_dl_filter(struct dpmaif_drv_info *drv_info, u32 l2risar0, + u32 l2rimr0) +{ + if (l2rimr0 & DPMAIF_DL_INT_DLQ0_QDONE_MSK) + l2risar0 &= ~DPMAIF_DL_INT_DLQ0_QDONE; + + if (l2rimr0 & DPMAIF_DL_INT_DLQ0_PITCNT_LEN_ERR_MSK) + l2risar0 &= ~DPMAIF_DL_INT_DLQ0_PITCNT_LEN_ERR; + + if (l2rimr0 & DPMAIF_DL_INT_FRG_LEN_ERR_MSK) + l2risar0 &= ~DPMAIF_DL_INT_FRG_LEN_ERR; + + if (l2rimr0 & DPMAIF_DL_INT_BATCNT_LEN_ERR_MSK) + l2risar0 &= ~DPMAIF_DL_INT_BATCNT_LEN_ERR; + + return l2risar0; +} + +static u32 mtk_dpmaif_drv_irq_src1_dl_filter(struct dpmaif_drv_info *drv_info, u32 l2risar0, + u32 l2rimr0) +{ + if (l2rimr0 & DPMAIF_DL_INT_DLQ1_QDONE_MSK) + l2risar0 &= ~DPMAIF_DL_INT_DLQ1_QDONE; + + if (l2rimr0 & DPMAIF_DL_INT_DLQ1_PITCNT_LEN_ERR_MSK) + l2risar0 &= ~DPMAIF_DL_INT_DLQ1_PITCNT_LEN_ERR; + + return l2risar0; +} + +static int mtk_dpmaif_drv_irq_src0(struct dpmaif_drv_info *drv_info, + struct dpmaif_drv_intr_info *intr_info) +{ + u32 val, l2risar0, l2rimr0; + + l2risar0 = mtk_dpmaif_drv_get_dl_lv2_sts(drv_info); + l2rimr0 = mtk_dpmaif_drv_get_dl_intr_mask(drv_info); + + l2risar0 &= DPMAIF_SRC0_DL_STATUS_MASK; + if (l2risar0) { + /* Filter to get DL unmasked interrupts */ + l2risar0 = mtk_dpmaif_drv_irq_src0_dl_filter(drv_info, l2risar0, l2rimr0); + + val = l2risar0 & DPMAIF_DL_INT_BATCNT_LEN_ERR; + if (val) { + intr_info->intr_types[intr_info->intr_cnt] = DPMAIF_INTR_DL_BATCNT_LEN_ERR; + intr_info->intr_queues[intr_info->intr_cnt] = DPMAIF_DLQ0; + intr_info->intr_cnt++; + mtk_dpmaif_drv_mask_dl_batcnt_len_err_intr(drv_info, DPMAIF_DLQ0); + } + + val = l2risar0 & DPMAIF_DL_INT_FRG_LEN_ERR; + if (val) { + intr_info->intr_types[intr_info->intr_cnt] = DPMAIF_INTR_DL_FRGCNT_LEN_ERR; + intr_info->intr_queues[intr_info->intr_cnt] = DPMAIF_DLQ0; + intr_info->intr_cnt++; + mtk_dpmaif_drv_mask_dl_frgcnt_len_err_intr(drv_info, DPMAIF_DLQ0); + } + + val = l2risar0 & DPMAIF_DL_INT_DLQ0_PITCNT_LEN_ERR; + if (val) { + intr_info->intr_types[intr_info->intr_cnt] = DPMAIF_INTR_DL_PITCNT_LEN_ERR; + intr_info->intr_queues[intr_info->intr_cnt] = 0x01 << DPMAIF_DLQ0; + intr_info->intr_cnt++; + mtk_dpmaif_drv_dlq_mask_pit_cnt_len_err_intr(drv_info, DPMAIF_DLQ0); + } + + val = l2risar0 & DPMAIF_DL_INT_DLQ0_QDONE; + if (val) { + if (!mtk_dpmaif_drv_dlq_mask_rx_done_intr(drv_info, DPMAIF_DLQ0)) { + intr_info->intr_types[intr_info->intr_cnt] = DPMAIF_INTR_DL_DONE; + intr_info->intr_queues[intr_info->intr_cnt] = 0x01 << DPMAIF_DLQ0; + intr_info->intr_cnt++; + } + } + + /* Clear interrupt status. */ + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TISAR0, l2risar0); + } + + return 0; +} + +static int mtk_dpmaif_drv_irq_src1(struct dpmaif_drv_info *drv_info, + struct dpmaif_drv_intr_info *intr_info) +{ + u32 val, l2risar0, l2rimr0; + + l2risar0 = mtk_dpmaif_drv_get_dl_lv2_sts(drv_info); + l2rimr0 = mtk_dpmaif_drv_get_dl_intr_mask(drv_info); + + /* Check and process interrupt. 
*/ + l2risar0 &= DPMAIF_SRC1_DL_STATUS_MASK; + if (l2risar0) { + /* Filter to get DL unmasked interrupts */ + l2risar0 = mtk_dpmaif_drv_irq_src1_dl_filter(drv_info, l2risar0, l2rimr0); + + val = l2risar0 & DPMAIF_DL_INT_DLQ1_PITCNT_LEN_ERR; + if (val) { + intr_info->intr_types[intr_info->intr_cnt] = DPMAIF_INTR_DL_PITCNT_LEN_ERR; + intr_info->intr_queues[intr_info->intr_cnt] = 0x01 << DPMAIF_DLQ1; + intr_info->intr_cnt++; + mtk_dpmaif_drv_dlq_mask_pit_cnt_len_err_intr(drv_info, DPMAIF_DLQ1); + } + + val = l2risar0 & DPMAIF_DL_INT_DLQ1_QDONE; + if (val) { + if (!mtk_dpmaif_drv_dlq_mask_rx_done_intr(drv_info, DPMAIF_DLQ1)) { + intr_info->intr_types[intr_info->intr_cnt] = DPMAIF_INTR_DL_DONE; + intr_info->intr_queues[intr_info->intr_cnt] = 0x01 << DPMAIF_DLQ1; + intr_info->intr_cnt++; + } + } + + /* Clear interrupt status. */ + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_DL_L2TISAR0, l2risar0); + } + + return 0; +} + +static int mtk_dpmaif_drv_irq_src2(struct dpmaif_drv_info *drv_info, + struct dpmaif_drv_intr_info *intr_info) +{ + u32 l2tisar0, l2timr0; + u8 q_mask; + u32 val; + + l2tisar0 = mtk_dpmaif_drv_get_ul_lv2_sts(drv_info); + l2timr0 = mtk_dpmaif_drv_get_ul_intr_mask(drv_info); + + /* Check and process interrupt. */ + l2tisar0 &= (~l2timr0); + if (l2tisar0) { + val = l2tisar0 & DPMAIF_UL_INT_QDONE; + if (val) { + q_mask = val >> DP_UL_INT_DONE_OFFSET & DPMAIF_ULQS; + mtk_dpmaif_drv_ul_mask_multi_tx_done_intr(drv_info, q_mask); + intr_info->intr_types[intr_info->intr_cnt] = DPMAIF_INTR_UL_DONE; + intr_info->intr_queues[intr_info->intr_cnt] = val >> DP_UL_INT_DONE_OFFSET; + intr_info->intr_cnt++; + } + + /* clear interrupt status */ + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_UL_L2TISAR0, l2tisar0); + } + + return 0; +} + +static int mtk_dpmaif_drv_intr_handle_t800(struct dpmaif_drv_info *drv_info, void *data, u8 irq_id) +{ + switch (irq_id) { + case MTK_IRQ_SRC_DPMAIF: + mtk_dpmaif_drv_irq_src0(drv_info, data); + break; + case MTK_IRQ_SRC_DPMAIF2: + mtk_dpmaif_drv_irq_src1(drv_info, data); + break; + case MTK_IRQ_SRC_DPMAIF3: + mtk_dpmaif_drv_irq_src2(drv_info, data); + break; + default: + break; + } + + return 0; +} + +static int mtk_dpmaif_drv_intr_complete_t800(struct dpmaif_drv_info *drv_info, + enum dpmaif_drv_intr_type type, u8 q_id, u64 data) +{ + int ret = 0; + + switch (type) { + case DPMAIF_INTR_UL_DONE: + if (data == DPMAIF_CLEAR_INTR) + mtk_dpmaif_drv_check_clr_ul_done_status(drv_info, q_id); + else + ret = mtk_dpmaif_drv_unmask_ulq_intr(drv_info, q_id); + break; + case DPMAIF_INTR_DL_BATCNT_LEN_ERR: + mtk_dpmaif_drv_unmask_dl_batcnt_len_err_intr(drv_info); + break; + case DPMAIF_INTR_DL_FRGCNT_LEN_ERR: + mtk_dpmaif_drv_unmask_dl_frgcnt_len_err_intr(drv_info); + break; + case DPMAIF_INTR_DL_PITCNT_LEN_ERR: + ret = mtk_dpmaif_drv_dlq_unmask_pit_cnt_len_err_intr(drv_info, q_id); + break; + case DPMAIF_INTR_DL_DONE: + ret = mtk_dpmaif_drv_dl_unmask_rx_done_intr(drv_info, q_id); + break; + default: + break; + } + + return ret; +} + +static int mtk_dpmaif_drv_clr_ip_busy_sts_t800(struct dpmaif_drv_info *drv_info) +{ + u32 ip_busy_sts; + + /* Get AP IP busy status. */ + ip_busy_sts = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_IP_BUSY); + + /* Clear AP IP busy. 
*/ + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_AP_IP_BUSY, ip_busy_sts); + + return 0; +} + +static int mtk_dpmaif_drv_dl_add_pit_cnt(struct dpmaif_drv_info *drv_info, + u32 qno, u32 pit_remain_cnt) +{ + u32 cnt = 0, dl_update; + + dl_update = pit_remain_cnt & 0x0003ffff; + dl_update |= DPMAIF_DL_ADD_UPDATE | (qno << DPMAIF_ADD_LRO_PIT_CHAN_OFS); + + do { + if ((mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DL_LROPIT_ADD) & + DPMAIF_DL_ADD_NOT_READY) == 0) { + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DL_LROPIT_ADD, dl_update); + break; + } + + udelay(POLL_INTERVAL_US); + cnt++; + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to add dlq%u pit-1, cnt=%u\n", + qno, pit_remain_cnt); + return -DATA_HW_REG_TIMEOUT; + } + + cnt = 0; + do { + if (!((mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DL_LROPIT_ADD) & + DPMAIF_DL_ADD_NOT_READY) == DPMAIF_DL_ADD_NOT_READY)) + break; + + cnt++; + udelay(POLL_INTERVAL_US); + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to add dlq%u pit-2, cnt=%u\n", + qno, pit_remain_cnt); + return -DATA_HW_REG_TIMEOUT; + } + + return 0; +} + +static int mtk_dpmaif_drv_dl_add_bat_cnt(struct dpmaif_drv_info *drv_info, u32 bat_entry_cnt) +{ + u32 cnt = 0, dl_bat_update; + + dl_bat_update = bat_entry_cnt & 0xffff; + dl_bat_update |= DPMAIF_DL_ADD_UPDATE; + do { + if ((mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_ADD) & + DPMAIF_DL_ADD_NOT_READY) == 0) { + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_ADD, dl_bat_update); + break; + } + + udelay(POLL_INTERVAL_US); + cnt++; + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, + "Failed to add bat-1, cnt=%u\n", bat_entry_cnt); + return -DATA_HW_REG_TIMEOUT; + } + + cnt = 0; + do { + if (!((mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_ADD) & + DPMAIF_DL_ADD_NOT_READY) == DPMAIF_DL_ADD_NOT_READY)) + break; + + cnt++; + udelay(POLL_INTERVAL_US); + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to add bat-2, cnt=%u\n", + bat_entry_cnt); + return -DATA_HW_REG_TIMEOUT; + } + return 0; +} + +static int mtk_dpmaif_drv_dl_add_frg_cnt(struct dpmaif_drv_info *drv_info, u32 frg_entry_cnt) +{ + u32 cnt = 0, dl_frg_update; + int ret = 0; + + dl_frg_update = frg_entry_cnt & 0xffff; + dl_frg_update |= DPMAIF_DL_FRG_ADD_UPDATE; + dl_frg_update |= DPMAIF_DL_ADD_UPDATE; + + do { + if (!(mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_ADD) + & DPMAIF_DL_ADD_NOT_READY)) { + mtk_hw_write32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_ADD, dl_frg_update); + break; + } + + udelay(POLL_INTERVAL_US); + cnt++; + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to add frag bat-1, cnt=%u\n", + frg_entry_cnt); + return -DATA_HW_REG_TIMEOUT; + } + + cnt = 0; + do { + if (!((mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_PD_DL_BAT_ADD) & + DPMAIF_DL_ADD_NOT_READY) == DPMAIF_DL_ADD_NOT_READY)) + break; + + cnt++; + udelay(POLL_INTERVAL_US); + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to add frag bat-2, cnt=%u\n", + frg_entry_cnt); + return -DATA_HW_REG_TIMEOUT; + } + return ret; +} + +static int mtk_dpmaif_drv_ul_add_drb(struct dpmaif_drv_info *drv_info, u8 q_num, u32 drb_cnt) +{ + u32 drb_entry_cnt = drb_cnt * DPMAIF_UL_DRB_ENTRY_WORD; + u32 cnt = 0, 
ul_update; + u64 addr; + + ul_update = drb_entry_cnt & 0x0000ffff; + ul_update |= DPMAIF_UL_ADD_UPDATE; + + if (q_num == 4) + addr = NRL2_DPMAIF_UL_ADD_DESC_CH4; + else + addr = DPMAIF_ULQ_ADD_DESC_CH_N(q_num); + + do { + if (!(mtk_hw_read32(DRV_TO_MDEV(drv_info), addr) & DPMAIF_UL_ADD_NOT_READY)) { + mtk_hw_write32(DRV_TO_MDEV(drv_info), addr, ul_update); + break; + } + + udelay(POLL_INTERVAL_US); + cnt++; + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to add ulq%u drb-1, cnt=%u\n", + q_num, drb_cnt); + return -DATA_HW_REG_TIMEOUT; + } + + cnt = 0; + do { + if (!((mtk_hw_read32(DRV_TO_MDEV(drv_info), addr) & + DPMAIF_UL_ADD_NOT_READY) == DPMAIF_UL_ADD_NOT_READY)) + break; + + cnt++; + udelay(POLL_INTERVAL_US); + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to add ulq%u drb-2, cnt=%u\n", + q_num, drb_cnt); + return -DATA_HW_REG_TIMEOUT; + } + return 0; +} + +static int mtk_dpmaif_drv_send_doorbell_t800(struct dpmaif_drv_info *drv_info, + enum dpmaif_drv_ring_type type, + u8 q_id, u32 cnt) +{ + int ret = 0; + + switch (type) { + case DPMAIF_PIT: + ret = mtk_dpmaif_drv_dl_add_pit_cnt(drv_info, q_id, cnt); + break; + case DPMAIF_BAT: + ret = mtk_dpmaif_drv_dl_add_bat_cnt(drv_info, cnt); + break; + case DPMAIF_FRAG: + ret = mtk_dpmaif_drv_dl_add_frg_cnt(drv_info, cnt); + break; + case DPMAIF_DRB: + ret = mtk_dpmaif_drv_ul_add_drb(drv_info, q_id, cnt); + break; + default: + break; + } + + return ret; +} + +static int mtk_dpmaif_drv_dl_get_pit_wridx(struct dpmaif_drv_info *drv_info, u32 qno) +{ + u32 pit_wridx; + + pit_wridx = (mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_DL_LRO_STA5 + qno * 0x20)) + & DPMAIF_DL_PIT_WRIDX_MSK; + if (unlikely(pit_wridx >= drv_info->drv_property.dlq[qno].pit_size)) + return -DATA_HW_REG_CHK_FAIL; + + return pit_wridx; +} + +static int mtk_dpmaif_drv_dl_get_pit_rdidx(struct dpmaif_drv_info *drv_info, u32 qno) +{ + u32 pit_rdidx; + + pit_rdidx = (mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_DL_LRO_STA6 + qno * 0x20)) + & DPMAIF_DL_PIT_WRIDX_MSK; + if (unlikely(pit_rdidx >= drv_info->drv_property.dlq[qno].pit_size)) + return -DATA_HW_REG_CHK_FAIL; + + return pit_rdidx; +} + +static int mtk_dpmaif_drv_dl_get_bat_ridx(struct dpmaif_drv_info *drv_info) +{ + u32 bat_ridx; + + bat_ridx = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_BAT_STA2) & + DPMAIF_DL_BAT_WRIDX_MSK; + + if (unlikely(bat_ridx >= drv_info->drv_property.ring.normal_bat_size)) + return -DATA_HW_REG_CHK_FAIL; + + return bat_ridx; +} + +static int mtk_dpmaif_drv_dl_get_bat_wridx(struct dpmaif_drv_info *drv_info) +{ + u32 bat_wridx; + + bat_wridx = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_BAT_STA3) & + DPMAIF_DL_BAT_WRIDX_MSK; + if (unlikely(bat_wridx >= drv_info->drv_property.ring.normal_bat_size)) + return -DATA_HW_REG_CHK_FAIL; + + return bat_wridx; +} + +static int mtk_dpmaif_drv_dl_get_frg_ridx(struct dpmaif_drv_info *drv_info) +{ + u32 frg_ridx; + + frg_ridx = mtk_hw_read32(DRV_TO_MDEV(drv_info), DPMAIF_AO_DL_FRG_STA2) & + DPMAIF_DL_FRG_WRIDX_MSK; + if (unlikely(frg_ridx >= drv_info->drv_property.ring.frag_bat_size)) + return -DATA_HW_REG_CHK_FAIL; + + return frg_ridx; +} + +static int mtk_dpmaif_drv_ul_get_drb_ridx(struct dpmaif_drv_info *drv_info, u8 q_num) +{ + u32 drb_ridx; + u64 addr; + + addr = DPMAIF_ULQ_STA0_N(q_num); + + drb_ridx = mtk_hw_read32(DRV_TO_MDEV(drv_info), addr) >> 16; + drb_ridx = drb_ridx / DPMAIF_UL_DRB_ENTRY_WORD; + + if 
(unlikely(drb_ridx >= drv_info->drv_property.ulq[q_num].drb_size)) + return -DATA_HW_REG_CHK_FAIL; + + return drb_ridx; +} + +static int mtk_dpmaif_drv_get_ring_idx_t800(struct dpmaif_drv_info *drv_info, + enum dpmaif_drv_ring_idx index, u8 q_id) +{ + int ret = 0; + + switch (index) { + case DPMAIF_PIT_WIDX: + ret = mtk_dpmaif_drv_dl_get_pit_wridx(drv_info, q_id); + break; + case DPMAIF_PIT_RIDX: + ret = mtk_dpmaif_drv_dl_get_pit_rdidx(drv_info, q_id); + break; + case DPMAIF_BAT_WIDX: + ret = mtk_dpmaif_drv_dl_get_bat_wridx(drv_info); + break; + case DPMAIF_BAT_RIDX: + ret = mtk_dpmaif_drv_dl_get_bat_ridx(drv_info); + break; + case DPMAIF_FRAG_RIDX: + ret = mtk_dpmaif_drv_dl_get_frg_ridx(drv_info); + break; + case DPMAIF_DRB_RIDX: + ret = mtk_dpmaif_drv_ul_get_drb_ridx(drv_info, q_id); + break; + default: + break; + } + + return ret; +} + +static u32 mtk_dpmaif_drv_hash_indir_get(struct dpmaif_drv_info *drv_info, u32 *indir) +{ + u32 val = mtk_dpmaif_drv_hash_indir_mask_get(drv_info); + u8 i; + + for (i = 0; i < DPMAIF_HASH_INDR_SIZE; i++) { + if (val & (0x01 << i)) + indir[i] = 1; + else + indir[i] = 0; + } + + return 0; +} + +static u32 mtk_dpmaif_drv_hash_indir_set(struct dpmaif_drv_info *drv_info, u32 *indir) +{ + u32 val = 0; + u8 i; + + for (i = 0; i < DPMAIF_HASH_INDR_SIZE; i++) { + if (indir[i]) + val |= (0x01 << i); + } + mtk_dpmaif_drv_hash_indir_mask_set(drv_info, val); + + return 0; +} + +static u32 mtk_dpmaif_drv_5tuple_trig(struct dpmaif_drv_info *drv_info, + struct dpmaif_hpc_rule *rule, u32 sw_add, + u32 agg_en, u32 ovw_en) +{ + u32 cnt, i, *val = (u32 *)rule; + + for (i = 0; i < sizeof(*rule) / sizeof(u32); i++) + mtk_hw_write32(DRV_TO_MDEV(drv_info), + NRL2_DPMAIF_HPC_SW_ADD_RULE0 + 4 * i, + *(val + i)); + + mtk_hw_write32(DRV_TO_MDEV(drv_info), + NRL2_DPMAIF_HPC_SW_5TUPLE_TRIG, + (ovw_en << 3) | (agg_en << 2) | (sw_add << 1) | 0x1); + + /* wait hw 5-tuple process finish */ + cnt = 0; + do { + if (!(mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_HPC_SW_5TUPLE_TRIG) & 0x1)) + break; + + cnt++; + udelay(POLL_INTERVAL_US); + } while (cnt < POLL_MAX_TIMES); + + if (cnt >= POLL_MAX_TIMES) { + dev_err(DRV_TO_MDEV(drv_info)->dev, "Failed to 5tuple trigger\n"); + return -DATA_HW_REG_TIMEOUT; + } + if (mtk_hw_read32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_HPC_5TUPLE_STS)) + return -DATA_HW_REG_CHK_FAIL; + + return 0; +} + +static int mtk_dpmaif_drv_ul_set_delay_intr(struct dpmaif_drv_info *drv_info, + u8 q_num, u8 mode, u32 time_us, u32 pkt_cnt) +{ + u32 ret = 0, cfg; + + cfg = ((mode & 0x3) << 30) | ((pkt_cnt & 0x3fff) << 16) | (time_us & 0xffff); + + switch (q_num) { + case 0: + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DLY_IRQ_TIMER3, cfg); + break; + case 1: + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DLY_IRQ_TIMER4, cfg); + break; + case 2: + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DLY_IRQ_TIMER5, cfg); + break; + case 3: + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DLY_IRQ_TIMER6, cfg); + break; + case 4: + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_DLY_IRQ_TIMER7, cfg); + break; + default: + dev_err(DRV_TO_MDEV(drv_info)->dev, "Invalid ulq=%d!\n", q_num); + ret = -EINVAL; + } + + return ret; +} + +static int mtk_dpmaif_drv_dl_set_delay_intr(struct dpmaif_drv_info *drv_info, + u8 q_num, u8 mode, u32 time_us, u32 pkt_cnt) +{ + int ret = 0; + u32 cfg = 0; + + cfg = ((mode & 0x3) << 30) | ((pkt_cnt & 0x3fff) << 16) | (time_us & 0xffff); + + switch (q_num) { + case 0: + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_DL_DLY_IRQ_TIMER1, 
cfg); + break; + case 1: + mtk_hw_write32(DRV_TO_MDEV(drv_info), NRL2_DPMAIF_AO_DL_DLY_IRQ_TIMER2, cfg); + break; + default: + dev_info(DRV_TO_MDEV(drv_info)->dev, "Invalid dlq=%d!\n", q_num); + ret = -EINVAL; + } + + return ret; +} + +static int mtk_dpmaif_drv_intr_coalesce_set(struct dpmaif_drv_info *drv_info, + struct dpmaif_drv_intr *intr) +{ + u8 i; + + if (intr->dir == DPMAIF_TX) { + for (i = 0; i < DPMAIF_ULQ_NUM; i++) { + if (intr->q_mask & (1 << i)) + mtk_dpmaif_drv_ul_set_delay_intr(drv_info, i, intr->mode, + intr->time_threshold, + intr->pkt_threshold); + } + } else { + for (i = 0; i < DPMAIF_DLQ_NUM; i++) { + if (intr->q_mask & (1 << i)) + mtk_dpmaif_drv_dl_set_delay_intr(drv_info, i, intr->mode, + intr->time_threshold, + intr->pkt_threshold); + } + } + + return 0; +} + +static int mtk_dpmaif_drv_feature_cmd_t800(struct dpmaif_drv_info *drv_info, + enum dpmaif_drv_cmd cmd, void *data) +{ + int ret = 0; + + switch (cmd) { + case DATA_HW_INTR_COALESCE_SET: + ret = mtk_dpmaif_drv_intr_coalesce_set(drv_info, data); + break; + case DATA_HW_HASH_GET: + ret = mtk_dpmaif_drv_hash_sec_key_get(drv_info, data); + break; + case DATA_HW_HASH_SET: + ret = mtk_dpmaif_drv_hash_sec_key_set(drv_info, data); + break; + case DATA_HW_HASH_KEY_SIZE_GET: + *(u32 *)data = DPMAIF_HASH_SEC_KEY_NUM; + break; + case DATA_HW_INDIR_GET: + ret = mtk_dpmaif_drv_hash_indir_get(drv_info, data); + break; + case DATA_HW_INDIR_SET: + ret = mtk_dpmaif_drv_hash_indir_set(drv_info, data); + break; + case DATA_HW_INDIR_SIZE_GET: + *(u32 *)data = DPMAIF_HASH_INDR_SIZE; + break; + case DATA_HW_LRO_SET: + ret = mtk_dpmaif_drv_5tuple_trig(drv_info, data, 1, 1, 1); + break; + default: + dev_info(DRV_TO_MDEV(drv_info)->dev, "Unsupport cmd=%d\n", cmd); + ret = -EOPNOTSUPP; + break; + } + + return ret; +} + +struct dpmaif_drv_ops dpmaif_drv_ops_t800 = { + .init = mtk_dpmaif_drv_init_t800, + .start_queue = mtk_dpmaif_drv_start_queue_t800, + .stop_queue = mtk_dpmaif_drv_stop_queue_t800, + .intr_handle = mtk_dpmaif_drv_intr_handle_t800, + .intr_complete = mtk_dpmaif_drv_intr_complete_t800, + .clear_ip_busy = mtk_dpmaif_drv_clr_ip_busy_sts_t800, + .send_doorbell = mtk_dpmaif_drv_send_doorbell_t800, + .get_ring_idx = mtk_dpmaif_drv_get_ring_idx_t800, + .feature_cmd = mtk_dpmaif_drv_feature_cmd_t800, +}; diff --git a/drivers/net/wwan/mediatek/pcie/mtk_dpmaif_reg_t800.h b/drivers/net/wwan/mediatek/pcie/mtk_dpmaif_reg_t800.h new file mode 100644 index 000000000000..8db2cd782a80 --- /dev/null +++ b/drivers/net/wwan/mediatek/pcie/mtk_dpmaif_reg_t800.h @@ -0,0 +1,368 @@ +/* SPDX-License-Identifier: BSD-3-Clause-Clear + * + * Copyright (c) 2022, MediaTek Inc. 
+ */ + +#ifndef __MTK_DPMAIF_DRV_T800_H__ +#define __MTK_DPMAIF_DRV_T800_H__ + +#define DPMAIF_DEV_PD_BASE (0x1022D000) +#define DPMAIF_DEV_AO_BASE (0x10011000) + +#define DPMAIF_PD_BASE DPMAIF_DEV_PD_BASE +#define DPMAIF_AO_BASE DPMAIF_DEV_AO_BASE + +#define BASE_NADDR_NRL2_DPMAIF_UL ((unsigned long)(DPMAIF_PD_BASE)) +#define BASE_NADDR_NRL2_DPMAIF_DL ((unsigned long)(DPMAIF_PD_BASE + 0x100)) +#define BASE_NADDR_NRL2_DPMAIF_AP_MISC ((unsigned long)(DPMAIF_PD_BASE + 0x400)) +#define BASE_NADDR_NRL2_DPMAIF_PD_SRAM_UL ((unsigned long)(DPMAIF_PD_BASE + 0xD00)) +#define BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL ((unsigned long)(DPMAIF_PD_BASE + 0xC00)) +#define BASE_NADDR_NRL2_DPMAIF_DL_LRO_REMOVEAO_IDX ((unsigned long)(DPMAIF_PD_BASE + 0x900)) +#define BASE_NADDR_NRL2_DPMAIF_MMW_HPC ((unsigned long)(DPMAIF_PD_BASE + 0x600)) +#define BASE_NADDR_NRL2_DPMAIF_PD_SRAM_MISC2 ((unsigned long)(DPMAIF_PD_BASE + 0xF00)) +#define BASE_NADDR_NRL2_DPMAIF_AO_UL ((unsigned long)(DPMAIF_AO_BASE)) +#define BASE_NADDR_NRL2_DPMAIF_AO_DL ((unsigned long)(DPMAIF_AO_BASE + 0x400)) + +/* dpmaif uplink part registers. */ +#define NRL2_DPMAIF_UL_ADD_DESC (BASE_NADDR_NRL2_DPMAIF_UL + 0x00) +#define NRL2_DPMAIF_UL_DBG_STA2 (BASE_NADDR_NRL2_DPMAIF_UL + 0x88) +#define NRL2_DPMAIF_UL_RESERVE_AO_RW (BASE_NADDR_NRL2_DPMAIF_UL + 0xAC) +#define NRL2_DPMAIF_UL_ADD_DESC_CH0 (BASE_NADDR_NRL2_DPMAIF_UL + 0xB0) +#define NRL2_DPMAIF_UL_ADD_DESC_CH4 (BASE_NADDR_NRL2_DPMAIF_UL + 0xE0) + +/* dpmaif downlink part registers. */ +#define NRL2_DPMAIF_DL_BAT_INIT (BASE_NADDR_NRL2_DPMAIF_DL + 0x00) +#define NRL2_DPMAIF_DL_BAT_INIT (BASE_NADDR_NRL2_DPMAIF_DL + 0x00) +#define NRL2_DPMAIF_DL_BAT_ADD (BASE_NADDR_NRL2_DPMAIF_DL + 0x04) +#define NRL2_DPMAIF_DL_BAT_INIT_CON0 (BASE_NADDR_NRL2_DPMAIF_DL + 0x08) +#define NRL2_DPMAIF_DL_BAT_INIT_CON1 (BASE_NADDR_NRL2_DPMAIF_DL + 0x0C) +#define NRL2_DPMAIF_DL_BAT_INIT_CON3 (BASE_NADDR_NRL2_DPMAIF_DL + 0x50) +#define NRL2_DPMAIF_DL_DBG_STA1 (BASE_NADDR_NRL2_DPMAIF_DL + 0xB4) + +/* dpmaif ap misc part registers. */ +#define NRL2_DPMAIF_AP_MISC_AP_L2TISAR0 (BASE_NADDR_NRL2_DPMAIF_AP_MISC + 0x00) +#define NRL2_DPMAIF_AP_MISC_APDL_L2TISAR0 (BASE_NADDR_NRL2_DPMAIF_AP_MISC + 0x50) +#define NRL2_DPMAIF_AP_MISC_AP_IP_BUSY (BASE_NADDR_NRL2_DPMAIF_AP_MISC + 0x60) +#define NRL2_DPMAIF_AP_MISC_CG_EN (BASE_NADDR_NRL2_DPMAIF_AP_MISC + 0x68) +#define NRL2_DPMAIF_AP_MISC_OVERWRITE_CFG (BASE_NADDR_NRL2_DPMAIF_AP_MISC + 0x90) +#define NRL2_DPMAIF_AP_MISC_RSTR_CLR (BASE_NADDR_NRL2_DPMAIF_AP_MISC + 0x94) + +/* dpmaif uplink ao part registers. */ +#define NRL2_DPMAIF_AO_UL_INIT_SET (BASE_NADDR_NRL2_DPMAIF_AO_UL + 0x0) +#define NRL2_DPMAIF_AO_UL_CHNL_ARB0 (BASE_NADDR_NRL2_DPMAIF_AO_UL + 0x1C) +#define NRL2_DPMAIF_AO_UL_AP_L2TIMR0 (BASE_NADDR_NRL2_DPMAIF_AO_UL + 0x80) +#define NRL2_DPMAIF_AO_UL_AP_L2TIMCR0 (BASE_NADDR_NRL2_DPMAIF_AO_UL + 0x84) +#define NRL2_DPMAIF_AO_UL_AP_L2TIMSR0 (BASE_NADDR_NRL2_DPMAIF_AO_UL + 0x88) +#define NRL2_DPMAIF_AO_UL_AP_L1TIMR0 (BASE_NADDR_NRL2_DPMAIF_AO_UL + 0x8C) +#define NRL2_DPMAIF_AO_UL_APDL_L2TIMR0 (BASE_NADDR_NRL2_DPMAIF_AO_UL + 0x90) +#define NRL2_DPMAIF_AO_UL_APDL_L2TIMCR0 (BASE_NADDR_NRL2_DPMAIF_AO_UL + 0x94) +#define NRL2_DPMAIF_AO_UL_APDL_L2TIMSR0 (BASE_NADDR_NRL2_DPMAIF_AO_UL + 0x98) +#define NRL2_DPMAIF_AO_UL_AP_DL_UL_IP_BUSY_MASK (BASE_NADDR_NRL2_DPMAIF_AO_UL + 0x9C) + +/* dpmaif uplink pd sram part registers. 
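These carry the per-queue DRB base address, size and high-address words (CHNL0_CON0..CON2) together with the uplink delay-interrupt timers programmed by mtk_dpmaif_drv_ul_set_delay_intr().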
*/ +#define NRL2_DPMAIF_AO_UL_CHNL0_CON0 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_UL + 0x10) +#define NRL2_DPMAIF_AO_UL_CHNL0_CON1 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_UL + 0x14) +#define NRL2_DPMAIF_AO_UL_CHNL0_CON2 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_UL + 0x18) +#define NRL2_DPMAIF_DLY_IRQ_TIMER3 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_UL + 0x1C) +#define NRL2_DPMAIF_DLY_IRQ_TIMER4 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_UL + 0x2C) +#define NRL2_DPMAIF_DLY_IRQ_TIMER5 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_UL + 0x3C) +#define NRL2_DPMAIF_DLY_IRQ_TIMER6 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_UL + 0x60) +#define NRL2_DPMAIF_DLY_IRQ_TIMER7 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_UL + 0x64) +#define NRL2_DPMAIF_AO_UL_CH0_STA (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_UL + 0xE0) + +/* dpmaif downlink ao part registers. */ +#define NRL2_DPMAIF_AO_DL_INIT_SET (BASE_NADDR_NRL2_DPMAIF_AO_DL + 0x0) +#define NRL2_DPMAIF_AO_DL_LROPIT_INIT_CON5 (BASE_NADDR_NRL2_DPMAIF_AO_DL + 0x28) +#define NRL2_DPMAIF_AO_DL_LROPIT_TRIG_THRES (BASE_NADDR_NRL2_DPMAIF_AO_DL + 0x34) + +/* dpmaif downlink pd sram part registers. */ +#define NRL2_DPMAIF_AO_DL_PKTINFO_CON0 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL + 0x0) +#define NRL2_DPMAIF_AO_DL_PKTINFO_CON1 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL + 0x4) +#define NRL2_DPMAIF_AO_DL_PKTINFO_CON2 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL + 0x8) +#define NRL2_DPMAIF_AO_DL_RDY_CHK_THRES (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL + 0xC) +#define NRL2_DPMAIF_AO_DL_RDY_CHK_FRG_THRES (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL + 0x10) +#define NRL2_DPMAIF_AO_DL_LRO_AGG_CFG (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL + 0x20) +#define NRL2_DPMAIF_AO_DL_LROPIT_TIMEOUT0 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL + 0x24) +#define NRL2_DPMAIF_AO_DL_LROPIT_TIMEOUT1 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL + 0x28) +#define NRL2_DPMAIF_AO_DL_HPC_CNTL (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL + 0x38) +#define NRL2_DPMAIF_AO_DL_PIT_SEQ_END (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL + 0x40) +#define NRL2_DPMAIF_AO_DL_DLY_IRQ_TIMER1 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL + 0x58) +#define NRL2_DPMAIF_AO_DL_DLY_IRQ_TIMER2 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL + 0x5C) +#define NRL2_DPMAIF_AO_DL_BAT_STA2 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL + 0xD8) +#define NRL2_DPMAIF_AO_DL_BAT_STA3 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL + 0xDC) +#define NRL2_DPMAIF_AO_DL_PIT_STA2 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL + 0xEC) +#define NRL2_DPMAIF_AO_DL_PIT_STA3 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL + 0x60) +#define NRL2_DPMAIF_AO_DL_FRGBAT_STA2 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL + 0x78) +#define NRL2_DPMAIF_AO_DL_LRO_STA5 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL + 0xA4) +#define NRL2_DPMAIF_AO_DL_LRO_STA6 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_DL + 0xA8) + +/* dpmaif hpc part registers. */ +#define NRL2_DPMAIF_HPC_SW_5TUPLE_TRIG (BASE_NADDR_NRL2_DPMAIF_MMW_HPC + 0x030) +#define NRL2_DPMAIF_HPC_5TUPLE_STS (BASE_NADDR_NRL2_DPMAIF_MMW_HPC + 0x034) +#define NRL2_DPMAIF_HPC_SW_ADD_RULE0 (BASE_NADDR_NRL2_DPMAIF_MMW_HPC + 0x060) +#define NRL2_DPMAIF_HPC_INTR_MASK (BASE_NADDR_NRL2_DPMAIF_MMW_HPC + 0x0F4) + +/* dpmaif LRO part registers. 
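These control the LRO PIT rings: the INIT and INIT_CONx registers are programmed by the PIT base/size/enable/init-done helpers above, and NRL2_DPMAIF_DL_LROPIT_ADD is the doorbell written by mtk_dpmaif_drv_dl_add_pit_cnt().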
*/ +#define NRL2_DPMAIF_DL_LROPIT_INIT (BASE_NADDR_NRL2_DPMAIF_DL_LRO_REMOVEAO_IDX + 0x0) +#define NRL2_DPMAIF_DL_LROPIT_ADD (BASE_NADDR_NRL2_DPMAIF_DL_LRO_REMOVEAO_IDX + 0x10) +#define NRL2_DPMAIF_DL_LROPIT_INIT_CON0 (BASE_NADDR_NRL2_DPMAIF_DL_LRO_REMOVEAO_IDX + 0x14) +#define NRL2_DPMAIF_DL_LROPIT_INIT_CON1 (BASE_NADDR_NRL2_DPMAIF_DL_LRO_REMOVEAO_IDX + 0x18) +#define NRL2_DPMAIF_DL_LROPIT_INIT_CON2 (BASE_NADDR_NRL2_DPMAIF_DL_LRO_REMOVEAO_IDX + 0x1C) +#define NRL2_DPMAIF_DL_LROPIT_INIT_CON5 (BASE_NADDR_NRL2_DPMAIF_DL_LRO_REMOVEAO_IDX + 0x28) +#define NRL2_DPMAIF_DL_LROPIT_INIT_CON3 (BASE_NADDR_NRL2_DPMAIF_DL_LRO_REMOVEAO_IDX + 0x20) +#define NRL2_DPMAIF_DL_LROPIT_INIT_CON4 (BASE_NADDR_NRL2_DPMAIF_DL_LRO_REMOVEAO_IDX + 0x24) +#define NRL2_DPMAIF_DL_LROPIT_INIT_CON6 (BASE_NADDR_NRL2_DPMAIF_DL_LRO_REMOVEAO_IDX + 0x2C) + +/* dpmaif pd sram misc2 part registers. */ +#define NRL2_REG_TOE_HASH_EN (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_MISC2 + 0x0) +#define NRL2_REG_HASH_CFG_CON (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_MISC2 + 0x4) +#define NRL2_REG_HASH_SEC_KEY_0 (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_MISC2 + 0x8) +#define NRL2_REG_HPC_STATS_THRES (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_MISC2 + 0x30) +#define NRL2_REG_HPC_STATS_TIMER_CFG (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_MISC2 + 0x34) +#define NRL2_REG_HASH_SEC_KEY_UPD (BASE_NADDR_NRL2_DPMAIF_PD_SRAM_MISC2 + 0X70) + +/* dpmaif pd ul, ao ul config. */ +#define DPMAIF_PD_UL_CHNL_ARB0 NRL2_DPMAIF_AO_UL_CHNL_ARB0 +#define DPMAIF_PD_UL_CHNL0_CON0 NRL2_DPMAIF_AO_UL_CHNL0_CON0 +#define DPMAIF_PD_UL_CHNL0_CON1 NRL2_DPMAIF_AO_UL_CHNL0_CON1 +#define DPMAIF_PD_UL_CHNL0_CON2 NRL2_DPMAIF_AO_UL_CHNL0_CON2 +#define DPMAIF_PD_UL_ADD_DESC_CH NRL2_DPMAIF_UL_ADD_DESC_CH0 +#define DPMAIF_PD_UL_DBG_STA2 NRL2_DPMAIF_UL_DBG_STA2 + +/* dpmaif pd dl config. */ +#define DPMAIF_PD_DL_BAT_INIT NRL2_DPMAIF_DL_BAT_INIT +#define DPMAIF_PD_DL_BAT_ADD NRL2_DPMAIF_DL_BAT_ADD +#define DPMAIF_PD_DL_BAT_INIT_CON0 NRL2_DPMAIF_DL_BAT_INIT_CON0 +#define DPMAIF_PD_DL_BAT_INIT_CON1 NRL2_DPMAIF_DL_BAT_INIT_CON1 +#define DPMAIF_PD_DL_BAT_INIT_CON3 NRL2_DPMAIF_DL_BAT_INIT_CON3 +#define DPMAIF_PD_DL_DBG_STA1 NRL2_DPMAIF_DL_DBG_STA1 + +/* dpmaif pd ap misc, ao ul misc config. */ +#define DPMAIF_PD_AP_UL_L2TISAR0 NRL2_DPMAIF_AP_MISC_AP_L2TISAR0 +#define DPMAIF_PD_AP_UL_L2TIMR0 NRL2_DPMAIF_AO_UL_AP_L2TIMR0 +#define DPMAIF_PD_AP_UL_L2TICR0 NRL2_DPMAIF_AO_UL_AP_L2TIMCR0 +#define DPMAIF_PD_AP_UL_L2TISR0 NRL2_DPMAIF_AO_UL_AP_L2TIMSR0 +#define DPMAIF_PD_AP_DL_L2TISAR0 NRL2_DPMAIF_AP_MISC_APDL_L2TISAR0 +#define DPMAIF_PD_AP_DL_L2TIMR0 NRL2_DPMAIF_AO_UL_APDL_L2TIMR0 +#define DPMAIF_PD_AP_DL_L2TICR0 NRL2_DPMAIF_AO_UL_APDL_L2TIMCR0 +#define DPMAIF_PD_AP_DL_L2TISR0 NRL2_DPMAIF_AO_UL_APDL_L2TIMSR0 +#define DPMAIF_PD_AP_IP_BUSY NRL2_DPMAIF_AP_MISC_AP_IP_BUSY +#define DPMAIF_PD_AP_DLUL_IP_BUSY_MASK NRL2_DPMAIF_AO_UL_AP_DL_UL_IP_BUSY_MASK + +/* dpmaif ao dl config. 
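Short driver-facing aliases that map the DPMAIF_AO_DL_* names used in the code onto the NRL2 SRAM register definitions above.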
*/ +#define DPMAIF_AO_DL_PKTINFO_CONO NRL2_DPMAIF_AO_DL_PKTINFO_CON0 +#define DPMAIF_AO_DL_PKTINFO_CON1 NRL2_DPMAIF_AO_DL_PKTINFO_CON1 +#define DPMAIF_AO_DL_PKTINFO_CON2 NRL2_DPMAIF_AO_DL_PKTINFO_CON2 +#define DPMAIF_AO_DL_RDY_CHK_THRES NRL2_DPMAIF_AO_DL_RDY_CHK_THRES +#define DPMAIF_AO_DL_BAT_STA2 NRL2_DPMAIF_AO_DL_BAT_STA2 +#define DPMAIF_AO_DL_BAT_STA3 NRL2_DPMAIF_AO_DL_BAT_STA3 +#define DPMAIF_AO_DL_PIT_STA2 NRL2_DPMAIF_AO_DL_PIT_STA2 +#define DPMAIF_AO_DL_PIT_STA3 NRL2_DPMAIF_AO_DL_PIT_STA3 +#define DPMAIF_AO_DL_FRG_CHK_THRES NRL2_DPMAIF_AO_DL_RDY_CHK_FRG_THRES +#define DPMAIF_AO_DL_FRG_STA2 NRL2_DPMAIF_AO_DL_FRGBAT_STA2 + +/* DPMAIF AO register */ +#define DPMAIF_AP_RGU_ASSERT 0x10001120 +#define DPMAIF_AP_RGU_DEASSERT 0x10001124 +#define DPMAIF_AP_RST_BIT BIT(4) +#define DPMAIF_AP_AO_RGU_ASSERT 0x10001140 +#define DPMAIF_AP_AO_RGU_DEASSERT 0x10001144 +#define DPMAIF_AP_AO_RST_BIT BIT(3) + +/* hw configuration */ +#define DPMAIF_ULQSAR_N(q_num)\ + ((DPMAIF_PD_UL_CHNL0_CON0) + (0x10 * (q_num))) + +#define DPMAIF_UL_DRBSIZE_ADDRH_N(q_num)\ + ((DPMAIF_PD_UL_CHNL0_CON1) + (0x10 * (q_num))) + +#define DPMAIF_UL_DRB_ADDRH_N(q_num)\ + ((DPMAIF_PD_UL_CHNL0_CON2) + (0x10 * (q_num))) + +#define DPMAIF_ULQ_STA0_N(q_num)\ + ((NRL2_DPMAIF_AO_UL_CH0_STA) + (0x04 * (q_num))) + +#define DPMAIF_ULQ_ADD_DESC_CH_N(q_num)\ + ((DPMAIF_PD_UL_ADD_DESC_CH) + (0x04 * (q_num))) + +#define DPMAIF_ULQS 0x1F + +#define DPMAIF_UL_ADD_NOT_READY BIT(31) +#define DPMAIF_UL_ADD_UPDATE BIT(31) +#define DPMAIF_UL_ALL_QUE_ARB_EN (DPMAIF_ULQS << 8) + +#define DPMAIF_DL_ADD_UPDATE BIT(31) +#define DPMAIF_DL_ADD_NOT_READY BIT(31) +#define DPMAIF_DL_FRG_ADD_UPDATE BIT(16) + +#define DPMAIF_DL_BAT_INIT_ALLSET BIT(0) +#define DPMAIF_DL_BAT_FRG_INIT BIT(16) +#define DPMAIF_DL_BAT_INIT_EN BIT(31) +#define DPMAIF_DL_BAT_INIT_NOT_READY BIT(31) +#define DPMAIF_DL_BAT_INIT_ONLY_ENABLE_BIT 0 + +#define DPMAIF_DL_PIT_INIT_ALLSET BIT(0) +#define DPMAIF_DL_PIT_INIT_EN BIT(31) +#define DPMAIF_DL_PIT_INIT_NOT_READY BIT(31) + +#define DPMAIF_PKT_ALIGN64_MODE 0 +#define DPMAIF_PKT_ALIGN128_MODE 1 + +#define DPMAIF_BAT_REMAIN_SZ_BASE 16 +#define DPMAIF_BAT_BUFFER_SZ_BASE 128 +#define DPMAIF_FRG_BUFFER_SZ_BASE 128 + +#define DPMAIF_PIT_SIZE_MSK 0x3FFFF + +#define DPMAIF_BAT_EN_MSK BIT(16) +#define DPMAIF_FRG_EN_MSK BIT(28) +#define DPMAIF_BAT_SIZE_MSK 0xFFFF + +#define DPMAIF_BAT_BID_MAXCNT_MSK 0xFFFF0000 +#define DPMAIF_BAT_REMAIN_MINSZ_MSK 0x0000FF00 +#define DPMAIF_PIT_CHK_NUM_MSK 0xFF000000 +#define DPMAIF_BAT_BUF_SZ_MSK 0x0001FF00 +#define DPMAIF_FRG_BUF_SZ_MSK 0x0001FF00 +#define DPMAIF_BAT_RSV_LEN_MSK 0x000000FF +#define DPMAIF_PKT_ALIGN_MSK (0x3 << 22) + +#define DPMAIF_BAT_CHECK_THRES_MSK (0x3F << 16) +#define DPMAIF_FRG_CHECK_THRES_MSK 0xFF +#define DPMAIF_PKT_ALIGN_EN BIT(23) +#define DPMAIF_DRB_SIZE_MSK 0x0000FFFF + +#define DPMAIF_DL_PIT_WRIDX_MSK 0x3FFFF +#define DPMAIF_DL_BAT_WRIDX_MSK 0x3FFFF +#define DPMAIF_DL_FRG_WRIDX_MSK 0x3FFFF + +/* DPMAIF_PD_UL_DBG_STA2 */ +#define DPMAIF_UL_IDLE_STS_MSK BIT(11) +#define DPMAIF_UL_IDLE_STS BIT(11) + +/* DPMAIF_PD_DL_DBG_STA1 */ +#define DPMAIF_DL_IDLE_STS BIT(23) +#define DPMAIF_DL_PKT_CHECKSUM_EN BIT(31) +#define DPMAIF_PORT_MODE_MSK BIT(30) +#define DPMAIF_PORT_MODE_PCIE BIT(30) + +/* BASE_NADDR_NRL2_DPMAIF_WDMA */ +#define DPMAIF_DL_BAT_CACHE_PRI BIT(22) +#define DPMAIF_DL_BURST_PIT_EN BIT(13) +#define DPMAIF_MEM_CLR_MASK BIT(0) +#define DPMAIF_SRAM_SYNC_MASK BIT(0) +#define DPMAIF_UL_INIT_DONE_MASK BIT(0) +#define DPMAIF_DL_INIT_DONE_MASK BIT(0) + +#define DPMAIF_DL_PIT_SEQ_MSK 
0xFF +#define DPMAIF_PCIE_MODE_SET_VALUE 0x55 + +#define DPMAIF_UDL_IP_BUSY_MSK BIT(0) + +#define DP_UL_INT_DONE_OFFSET 0 +#define DP_UL_INT_EMPTY_OFFSET 5 +#define DP_UL_INT_MD_NOTRDY_OFFSET 10 +#define DP_UL_INT_PWR_NOTRDY_OFFSET 15 +#define DP_UL_INT_LEN_ERR_OFFSET 20 + +/* Enable and mask/unmaks UL interrupt */ +#define DPMAIF_UL_INT_QDONE_MSK (DPMAIF_ULQS << DP_UL_INT_DONE_OFFSET) +#define DPMAIF_UL_TOP0_INT_MSK BIT(9) + +/* UL interrupt status */ +#define DPMAIF_UL_INT_QDONE (DPMAIF_ULQS << DP_UL_INT_DONE_OFFSET) + +/* Enable and Mask/unmask DL interrupt */ +#define DPMAIF_DL_INT_BATCNT_LEN_ERR_MSK BIT(2) +#define DPMAIF_DL_INT_FRG_LEN_ERR_MSK BIT(7) +#define DPMAIF_DL_INT_DLQ0_QDONE_MSK BIT(8) +#define DPMAIF_DL_INT_DLQ1_QDONE_MSK BIT(9) +#define DPMAIF_DL_INT_DLQ0_PITCNT_LEN_ERR_MSK BIT(10) +#define DPMAIF_DL_INT_DLQ1_PITCNT_LEN_ERR_MSK BIT(11) +#define DPMAIF_DL_INT_Q2TOQ1_MSK BIT(24) +#define DPMAIF_DL_INT_Q2APTOP_MSK BIT(25) + +/* DL interrupt status */ +#define DPMAIF_DL_INT_DUMMY_STATUS BIT(0) +#define DPMAIF_DL_INT_BATCNT_LEN_ERR BIT(2) +#define DPMAIF_DL_INT_FRG_LEN_ERR BIT(7) +#define DPMAIF_DL_INT_DLQ0_PITCNT_LEN_ERR BIT(8) +#define DPMAIF_DL_INT_DLQ1_PITCNT_LEN_ERR BIT(9) +#define DPMAIF_DL_INT_DLQ0_QDONE BIT(13) +#define DPMAIF_DL_INT_DLQ1_QDONE BIT(14) + +/* DPMAIF LRO HW configure */ +#define DPMAIF_HPC_LRO_PATH_DF 3 +/* 0: HPC rules add by HW; 1: HPC rules add by Host */ +#define DPMAIF_HPC_ADD_MODE_DF 0 +#define DPMAIF_HPC_TOTAL_NUM 8 +#define DPMAIF_HPC_MAX_TOTAL_NUM 8 +#define DPMAIF_AGG_MAX_LEN_DF 65535 +#define DPMAIF_AGG_TBL_ENT_NUM_DF 50 +#define DPMAIF_HASH_PRIME_DF 13 +#define DPMAIF_MID_TIMEOUT_THRES_DF 100 +#define DPMAIF_LRO_TIMEOUT_THRES_DF 100 +#define DPMAIF_LRO_PRS_THRES_DF 10 +#define DPMAIF_LRO_HASH_BIT_CHOOSE_DF 0 + +#define DPMAIF_LROPIT_EN_MSK 0x100000 +#define DPMAIF_LROPIT_CHAN_OFS 16 +#define DPMAIF_ADD_LRO_PIT_CHAN_OFS 20 + +#define DPMAIF_DL_PIT_BYTE_SIZE 16 +#define DPMAIF_DL_BAT_BYTE_SIZE 8 +#define DPMAIF_DL_FRG_BYTE_SIZE 8 +#define DPMAIF_UL_DRB_BYTE_SIZE 16 + +#define DPMAIF_UL_DRB_ENTRY_WORD (DPMAIF_UL_DRB_BYTE_SIZE >> 2) +#define DPMAIF_DL_PIT_ENTRY_WORD (DPMAIF_DL_PIT_BYTE_SIZE >> 2) +#define DPMAIF_DL_BAT_ENTRY_WORD (DPMAIF_DL_BAT_BYTE_SIZE >> 2) + +#define DPMAIF_HW_BAT_REMAIN 64 +#define DPMAIF_HW_PKT_BIDCNT 1 + +#define DPMAIF_HW_CHK_BAT_NUM 62 +#define DPMAIF_HW_CHK_FRG_NUM 3 +#define DPMAIF_HW_CHK_PIT_NUM (2 * DPMAIF_HW_CHK_BAT_NUM) + +#define DPMAIF_DLQ_NUM 2 +#define DPMAIF_ULQ_NUM 5 +#define DPMAIF_PKT_BIDCNT 1 + +#define DPMAIF_TOEPLITZ_HASH_EN 1 + +/* word num */ +#define DPMAIF_HASH_SEC_KEY_NUM 40 +#define DPMAIF_HASH_DEFAULT_VALUE 0 +#define DPMAIF_HASH_BIT_MASK_DF 0x7 +#define DPMAIF_HASH_INDR_MASK_DF 0xF0 + +/* 10k */ +#define DPMAIF_HPC_STATS_THRESHOLD 0x2800 + +/* 0x7A1- 1s: unit:512us */ +#define DPMAIF_HPC_STATS_TIMER_CFG 0 + +#define DPMAIF_HASH_INDR_SIZE (DPMAIF_HASH_BIT_MASK_DF + 1) +#define DPMAIF_HASH_INDR_MASK 0xFF00FFFF +#define DPMAIF_HASH_DEFAULT_V_MASK 0xFFFFFF00 +#define DPMAIF_HASH_BIT_MASK 0xFFFFF0FF + +/* dpmaif interrupt configuration */ +#define DPMAIF_AP_UL_L2INTR_EN_MASK DPMAIF_UL_INT_QDONE_MSK + +#define DPMAIF_AP_DL_L2INTR_EN_MASK\ + (DPMAIF_DL_INT_DLQ0_QDONE_MSK | DPMAIF_DL_INT_DLQ1_QDONE_MSK |\ + DPMAIF_DL_INT_DLQ0_PITCNT_LEN_ERR_MSK | DPMAIF_DL_INT_DLQ1_PITCNT_LEN_ERR_MSK |\ + DPMAIF_DL_INT_BATCNT_LEN_ERR_MSK | DPMAIF_DL_INT_FRG_LEN_ERR_MSK) + +#define DPMAIF_AP_UDL_IP_BUSY_EN_MASK (DPMAIF_UDL_IP_BUSY_MSK) + +/* dpmaif interrupt mask status by interrupt source */ +#define 
DPMAIF_SRC0_DL_STATUS_MASK\ + (DPMAIF_DL_INT_DLQ0_QDONE | DPMAIF_DL_INT_DLQ0_PITCNT_LEN_ERR |\ + DPMAIF_DL_INT_BATCNT_LEN_ERR | DPMAIF_DL_INT_FRG_LEN_ERR | DPMAIF_DL_INT_DUMMY_STATUS) + +#define DPMAIF_SRC1_DL_STATUS_MASK\ + (DPMAIF_DL_INT_DLQ1_QDONE | DPMAIF_DL_INT_DLQ1_PITCNT_LEN_ERR) + +#endif

From patchwork Tue Nov 22 11:11:48 2022
X-Patchwork-Id: 24308
From: Yanchao Yang
To: Loic Poulain , Sergey Ryazanov , Johannes Berg , "David S . Miller" , Eric Dumazet , "Jakub Kicinski" , Paolo Abeni , netdev ML , kernel ML
CC: MTK ML , Liang Lu , Haijun Liu , Hua Yang , Ting Wang , Felix Chen , Mingliang Xu , Min Dong , Aiden Wang , Guohao Zhang , Chris Feng , "Yanchao Yang" , Lambert Wang , Mingchuang Qiao , Xiayu Zhang , Haozhe Chang , MediaTek Corporation
Subject: [PATCH net-next v1 09/13] net: wwan: tmi: Add data plane transaction layer
Date: Tue, 22 Nov 2022 19:11:48 +0800
Message-ID: <20221122111152.160377-10-yanchao.yang@mediatek.com>
In-Reply-To: <20221122111152.160377-1-yanchao.yang@mediatek.com>
References: <20221122111152.160377-1-yanchao.yang@mediatek.com>

From: MediaTek Corporation

Data Path Modem AP Interface (DPMAIF) provides methods for initialization, ring buffer management, ISR, control and handling of TX/RX services' flows.

DPMAIF TX
It exposes the function 'mtk_dpmaif_send' which can be called by the port layer indirectly to transmit packets. The transaction layer manages uplink data with the Descriptor Ring Buffer (DRB), which includes one message DRB entry and one or more normal DRB entries. The message DRB holds the general packet information and each normal DRB entry holds the address of one packet segment. At the same time, DPMAIF provides multiple virtual queues with different priorities.

DPMAIF RX
The downlink buffer management uses Buffer Address Table (BAT) rings, which include a normal BAT and a fragment BAT, and Packet Information Table (PIT) rings. The BAT ring holds the addresses of the skb data buffers for the hardware to use, while the PIT contains metadata about a whole network packet, including a reference to the BAT entry holding the data buffer address. The driver reads the PIT and BAT entries written by the modem. When a threshold is reached, the driver reloads the PIT and BAT rings.
Signed-off-by: Yanchao Yang Signed-off-by: MediaTek Corporation --- drivers/net/wwan/mediatek/Makefile | 3 +- drivers/net/wwan/mediatek/mtk_data_plane.h | 101 + drivers/net/wwan/mediatek/mtk_dev.c | 8 + drivers/net/wwan/mediatek/mtk_dev.h | 13 + drivers/net/wwan/mediatek/mtk_dpmaif.c | 4051 ++++++++++++++++++++ drivers/net/wwan/mediatek/pcie/mtk_pci.c | 6 + 6 files changed, 4181 insertions(+), 1 deletion(-) create mode 100644 drivers/net/wwan/mediatek/mtk_data_plane.h create mode 100644 drivers/net/wwan/mediatek/mtk_dpmaif.c diff --git a/drivers/net/wwan/mediatek/Makefile b/drivers/net/wwan/mediatek/Makefile index 662594e1ad95..d48c2a0d33d9 100644 --- a/drivers/net/wwan/mediatek/Makefile +++ b/drivers/net/wwan/mediatek/Makefile @@ -12,7 +12,8 @@ mtk_tmi-y = \ pcie/mtk_dpmaif_drv_t800.o \ mtk_port.o \ mtk_port_io.o \ - mtk_fsm.o + mtk_fsm.o \ + mtk_dpmaif.o ccflags-y += -I$(srctree)/$(src)/ ccflags-y += -I$(srctree)/$(src)/pcie/ diff --git a/drivers/net/wwan/mediatek/mtk_data_plane.h b/drivers/net/wwan/mediatek/mtk_data_plane.h new file mode 100644 index 000000000000..4daf3ec32c91 --- /dev/null +++ b/drivers/net/wwan/mediatek/mtk_data_plane.h @@ -0,0 +1,101 @@ +/* SPDX-License-Identifier: BSD-3-Clause-Clear + * + * Copyright (c) 2022, MediaTek Inc. + */ + +#ifndef __MTK_DATA_PLANE_H__ +#define __MTK_DATA_PLANE_H__ + +#include +#include +#include + +#define SKB_TO_CMD(skb) ((struct mtk_data_cmd *)(skb)->data) +#define CMD_TO_DATA(cmd) (*(void **)(cmd)->data) +#define SKB_TO_CMD_DATA(skb) (*(void **)SKB_TO_CMD(skb)->data) + +#define IPV4_VERSION 0x40 +#define IPV6_VERSION 0x60 + +enum mtk_data_feature { + DATA_F_LRO = BIT(0), + DATA_F_RXFH = BIT(1), + DATA_F_INTR_COALESCE = BIT(2), + DATA_F_MULTI_NETDEV = BIT(16), + DATA_F_ETH_PDN = BIT(17), +}; + +struct mtk_data_blk { + struct mtk_md_dev *mdev; + struct mtk_dpmaif_ctlb *dcb; +}; + +enum mtk_data_type { + DATA_PKT, + DATA_CMD, +}; + +enum mtk_pkt_type { + PURE_IP, +}; + +enum mtk_data_cmd_type { + DATA_CMD_TRANS_CTL, + DATA_CMD_RXFH_GET, + DATA_CMD_RXFH_SET, + DATA_CMD_TRANS_DUMP, + DATA_CMD_RXQ_NUM_GET, + DATA_CMD_HKEY_SIZE_GET, + DATA_CMD_INDIR_SIZE_GET, + DATA_CMD_INTR_COALESCE_GET, + DATA_CMD_INTR_COALESCE_SET, + DATA_CMD_STRING_CNT_GET, + DATA_CMD_STRING_GET, +}; + +struct mtk_data_intr_coalesce { + unsigned int rx_coalesce_usecs; + unsigned int tx_coalesce_usecs; + unsigned int rx_coalesced_frames; + unsigned int tx_coalesced_frames; +}; + +struct mtk_data_rxfh { + unsigned int *indir; + u8 *key; +}; + +struct mtk_data_trans_ctl { + bool enable; +}; + +struct mtk_data_cmd { + void (*data_complete)(void *data); + struct completion done; + int ret; + enum mtk_data_cmd_type cmd; + unsigned int len; + char data[]; +}; + +struct mtk_data_trans_ops { + int (*poll)(struct napi_struct *napi, int budget); + int (*select_txq)(struct sk_buff *skb, enum mtk_pkt_type pkt_type); + int (*send)(struct mtk_data_blk *data_blk, enum mtk_data_type type, + struct sk_buff *skb, u64 data); +}; + +struct mtk_data_trans_info { + u32 cap; + unsigned char rxq_cnt; + unsigned char txq_cnt; + unsigned int max_mtu; + struct napi_struct **napis; +}; + +int mtk_data_init(struct mtk_md_dev *mdev); +int mtk_data_exit(struct mtk_md_dev *mdev); + +extern struct mtk_data_trans_ops data_trans_ops; + +#endif /* __MTK_DATA_PLANE_H__ */ diff --git a/drivers/net/wwan/mediatek/mtk_dev.c b/drivers/net/wwan/mediatek/mtk_dev.c index 3bdd2888e072..d4472491ce9a 100644 --- a/drivers/net/wwan/mediatek/mtk_dev.c +++ b/drivers/net/wwan/mediatek/mtk_dev.c @@ -5,6 +5,7 @@ #include 
"mtk_bm.h" #include "mtk_ctrl_plane.h" +#include "mtk_data_plane.h" #include "mtk_dev.h" #include "mtk_fsm.h" @@ -24,6 +25,12 @@ int mtk_dev_init(struct mtk_md_dev *mdev) if (ret) goto err_ctrl_init; + ret = mtk_data_init(mdev); + if (ret) + goto err_data_init; + +err_data_init: + mtk_ctrl_exit(mdev); err_ctrl_init: mtk_bm_exit(mdev); err_bm_init: @@ -36,6 +43,7 @@ void mtk_dev_exit(struct mtk_md_dev *mdev) { mtk_fsm_evt_submit(mdev, FSM_EVT_DEV_RM, 0, NULL, 0, EVT_MODE_BLOCKING | EVT_MODE_TOHEAD); + mtk_data_exit(mdev); mtk_ctrl_exit(mdev); mtk_bm_exit(mdev); mtk_fsm_exit(mdev); diff --git a/drivers/net/wwan/mediatek/mtk_dev.h b/drivers/net/wwan/mediatek/mtk_dev.h index 26f0c87079cb..2739b8068a31 100644 --- a/drivers/net/wwan/mediatek/mtk_dev.h +++ b/drivers/net/wwan/mediatek/mtk_dev.h @@ -119,6 +119,7 @@ struct mtk_hw_ops { int (*reset)(struct mtk_md_dev *mdev, enum mtk_reset_type type); int (*reinit)(struct mtk_md_dev *mdev, enum mtk_reinit_type type); + bool (*mmio_check)(struct mtk_md_dev *mdev); int (*get_hp_status)(struct mtk_md_dev *mdev); }; @@ -133,6 +134,7 @@ struct mtk_md_dev { struct mtk_md_fsm *fsm; void *ctrl_blk; + void *data_blk; struct mtk_bm_ctrl *bm_ctrl; }; @@ -427,6 +429,17 @@ static inline int mtk_hw_reinit(struct mtk_md_dev *mdev, enum mtk_reinit_type ty return mdev->hw_ops->reinit(mdev, type); } +/* mtk_hw_mmio_check() -Check if the PCIe MMIO is ready. + * + * @mdev: Device instance. + * + * Return: 0 indicates PCIe MMIO is ready, other value indicates not ready + */ +static inline bool mtk_hw_mmio_check(struct mtk_md_dev *mdev) +{ + return mdev->hw_ops->mmio_check(mdev); +} + /* mtk_hw_get_hp_status() -Get whether the device can be hot-plugged. * * @mdev: Device instance. diff --git a/drivers/net/wwan/mediatek/mtk_dpmaif.c b/drivers/net/wwan/mediatek/mtk_dpmaif.c new file mode 100644 index 000000000000..a8b23b2cf448 --- /dev/null +++ b/drivers/net/wwan/mediatek/mtk_dpmaif.c @@ -0,0 +1,4051 @@ +// SPDX-License-Identifier: BSD-3-Clause-Clear +/* + * Copyright (c) 2022, MediaTek Inc. + */ + +#include +#include +#include +#include +#include +#include +#include + +#include "mtk_bm.h" +#include "mtk_data_plane.h" +#include "mtk_dev.h" +#include "mtk_dpmaif_drv.h" +#include "mtk_fsm.h" +#include "mtk_reg.h" + +#define DPMAIF_PIT_CNT_UPDATE_THRESHOLD 60 +#define DPMAIF_SKB_TX_WEIGHT 5 + +/* Interrupt coalesce default value */ +#define DPMAIF_DFLT_INTR_RX_COA_FRAMES 0 +#define DPMAIF_DFLT_INTR_TX_COA_FRAMES 0 +#define DPMAIF_DFLT_INTR_RX_COA_USECS 0 +#define DPMAIF_DFLT_INTR_TX_COA_USECS 0 +#define DPMAIF_INTR_EN_TIME BIT(0) +#define DPMAIF_INTR_EN_PKT BIT(1) + +/* Dpmaif hardware DMA descriptor structure. */ +enum dpmaif_rcsum_state { + CS_RESULT_INVALID = -1, + CS_RESULT_PASS = 0, + CS_RESULT_FAIL = 1, + CS_RESULT_NOTSUPP = 2, + CS_RESULT_RSV = 3 +}; + +struct dpmaif_msg_pit { + __le32 dword1; + __le32 dword2; + __le32 dword3; + __le32 dword4; +}; + +#define PIT_MSG_DP BIT(31) /* Indicates software to drop this packet if set. 
*/ +#define PIT_MSG_DW1_RSV1 GENMASK(30, 27) +#define PIT_MSG_NET_TYPE GENMASK(26, 24) +#define PIT_MSG_CHNL_ID GENMASK(23, 16) /* channel index */ +#define PIT_MSG_DW1_RSV2 GENMASK(15, 12) +#define PIT_MSG_HPC_IDX GENMASK(11, 8) +#define PIT_MSG_SRC_QID GENMASK(7, 5) +#define PIT_MSG_ERR BIT(4) +#define PIT_MSG_CHECKSUM GENMASK(3, 2) +#define PIT_MSG_CONT BIT(1) /* 0b: last entry; 1b: more entry */ +#define PIT_MSG_PKT_TYPE BIT(0) /* 0b: normal PIT entry; 1b: message PIT entry */ + +#define PIT_MSG_HP_IDX GENMASK(31, 27) +#define PIT_MSG_CMD GENMASK(26, 24) +#define PIT_MSG_DW2_RSV GENMASK(23, 21) +#define PIT_MSG_FLOW GENMASK(20, 16) +#define PIT_MSG_COUNT_L GENMASK(15, 0) + +#define PIT_MSG_HASH GENMASK(31, 24) /* Hash value calculated by Hardware using packet */ +#define PIT_MSG_DW3_RSV1 GENMASK(23, 18) +#define PIT_MSG_PRO GENMASK(17, 16) +#define PIT_MSG_VBID GENMASK(15, 3) +#define PIT_MSG_DW3_RSV2 GENMASK(2, 0) + +#define PIT_MSG_DLQ_DONE GENMASK(31, 30) +#define PIT_MSG_ULQ_DONE GENMASK(29, 24) +#define PIT_MSG_IP BIT(23) +#define PIT_MSG_DW4_RSV1 BIT(22) +#define PIT_MSG_MR GENMASK(21, 20) +#define PIT_MSG_DW4_RSV2 GENMASK(19, 17) +#define PIT_MSG_IG BIT(16) +#define PIT_MSG_DW4_RSV3 GENMASK(15, 11) +#define PIT_MSG_H_BID GENMASK(10, 8) +/* An incremental number for each PIT, updated for each PIT entries. + * It is reset to 0 when its value reaches the maximum value. + */ +#define PIT_MSG_PIT_SEQ GENMASK(7, 0) + +/* c_bit */ +#define DPMAIF_PIT_LASTONE 0x00 +#define DPMAIF_PIT_MORE 0x01 + +/* pit type */ +enum dpmaif_pit_type { + PD_PIT = 0, + MSG_PIT, +}; + +/* buffer type */ +enum dpmaif_bat_type { + NORMAL_BAT = 0, + FRAG_BAT = 1, +}; + +struct dpmaif_pd_pit { + __le32 pd_header; + __le32 addr_low; + __le32 addr_high; + __le32 pd_footer; +}; + +#define PIT_PD_DATA_LEN GENMASK(31, 16) /* Indicates the data length of current packet. */ +#define PIT_PD_BUF_ID GENMASK(15, 3) /* The low order of buffer index */ +#define PIT_PD_BUF_TYPE BIT(2) /* 0b: normal BAT entry; 1b: fragment BAT entry */ +#define PIT_PD_CONT BIT(1) /* 0b: last entry; 1b: more entry */ +#define PIT_PD_PKT_TYPE BIT(0) /* 0b: normal PIT entry; 1b: message PIT entry */ + +#define PIT_PD_DLQ_DONE GENMASK(31, 30) +#define PIT_PD_ULQ_DONE GENMASK(29, 24) +/* The header length of transport layer and internet layer. */ +#define PIT_PD_HD_OFFSET GENMASK(23, 19) +#define PIT_PD_BI_F GENMASK(18, 17) +#define PIT_PD_IG BIT(16) +#define PIT_PD_RSV GENMASK(15, 11) +#define PIT_PD_H_BID GENMASK(10, 8) /* The high order of buffer index */ +#define PIT_PD_SEQ GENMASK(7, 0) /* PIT sequence */ + +/* RX: buffer address table */ +struct dpmaif_bat { + __le32 buf_addr_low; + __le32 buf_addr_high; +}; + +/* drb->type */ +enum dpmaif_drb_type { + PD_DRB, + MSG_DRB, +}; + +#define DPMAIF_DRB_LASTONE 0x00 +#define DPMAIF_DRB_MORE 0x01 + +struct dpmaif_msg_drb { + __le32 msg_header1; + __le32 msg_header2; + __le32 msg_rsv1; + __le32 msg_rsv2; +}; + +#define DRB_MSG_PKT_LEN GENMASK(31, 16) /* The length of a whole packet. 
*/ +#define DRB_MSG_DW1_RSV GENMASK(15, 3) +#define DRB_MSG_CONT BIT(2) /* 0b: last entry; 1b: more entry */ +#define DRB_MSG_DTYP GENMASK(1, 0) /* 00b: normal DRB entry; 01b: message DRB entry */ + +#define DRB_MSG_DW2_RSV1 GENMASK(31, 30) +#define DRB_MSG_L4_CHK BIT(29) /* 0b: disable layer4 checksum offload; 1b: enable */ +#define DRB_MSG_IP_CHK BIT(28) /* 0b: disable IP checksum, 1b: enable IP checksum */ +#define DRB_MSG_DW2_RSV2 BIT(27) +#define DRB_MSG_NET_TYPE GENMASK(26, 24) +#define DRB_MSG_CHNL_ID GENMASK(23, 16) /* channel index */ +#define DRB_MSG_COUNT_L GENMASK(15, 0) + +struct dpmaif_pd_drb { + __le32 pd_header; + __le32 addr_low; + __le32 addr_high; + __le32 pd_rsv; +}; + +#define DRB_PD_DATA_LEN GENMASK(31, 16) /* the length of a payload. */ +#define DRB_PD_RSV GENMASK(15, 3) +#define DRB_PD_CONT BIT(2)/* 0b: last entry; 1b: more entry */ +#define DRB_PD_DTYP GENMASK(1, 0) /* 00b: normal DRB entry; 01b: message DRB entry. */ + +/* software resource structure */ +#define DPMAIF_SRV_CNT_MAX DPMAIF_TXQ_CNT_MAX + +/* struct dpmaif_res_cfg - dpmaif resource configuration + * @tx_srv_cnt: Transmit services count. + * @tx_vq_cnt: Transmit virtual queue count. + * @tx_vq_srv_map: Transmit virtual queue and service map. + * Array index indicates virtual queue id, Array value indicates service id. + * @srv_prio_tbl: Transmit services priority + * Array index indicates service id, Array value indicates kthread nice value. + * @irq_cnt: Dpmaif interrupt source count. + * @irq_src: Dpmaif interrupt source id. + * @txq_cnt: Dpmaif Transmit queue count. + * @rxq_cnt: Dpmaif Receive queue count. + * @normal_bat_cnt: Dpmaif normal bat entry count. + * @frag_bat_cnt: Dpmaif frag bat entry count. + * @pit_cnt: Dpmaif pit entry count per receive queue. + * @drb_cnt: Dpmaif drb entry count per transmit queue. + * @cap: Dpmaif capability. 
+ */ +struct dpmaif_res_cfg { + unsigned char tx_srv_cnt; + unsigned char tx_vq_cnt; + unsigned char tx_vq_srv_map[DPMAIF_TXQ_CNT_MAX]; + int srv_prio_tbl[DPMAIF_SRV_CNT_MAX]; + unsigned int txq_doorbell_delay[DPMAIF_TXQ_CNT_MAX]; + unsigned char irq_cnt; + enum mtk_irq_src irq_src[DPMAIF_IRQ_CNT_MAX]; + unsigned char txq_cnt; + unsigned char rxq_cnt; + unsigned int normal_bat_cnt; + unsigned int frag_bat_cnt; + unsigned int pit_cnt[DPMAIF_RXQ_CNT_MAX]; + unsigned int drb_cnt[DPMAIF_TXQ_CNT_MAX]; + unsigned int cap; +}; + +static const struct dpmaif_res_cfg res_cfg_t800 = { + .tx_srv_cnt = 4, + .tx_vq_cnt = 5, + .tx_vq_srv_map = {3, 1, 2, 0, 3}, + .srv_prio_tbl = {-20, -15, -10, -5}, + .txq_doorbell_delay = {0}, + .irq_cnt = 3, + .irq_src = {MTK_IRQ_SRC_DPMAIF, MTK_IRQ_SRC_DPMAIF2, MTK_IRQ_SRC_DPMAIF3}, + .txq_cnt = 5, + .rxq_cnt = 2, + .normal_bat_cnt = 16384, + .frag_bat_cnt = 8192, + .pit_cnt = {16384, 16384}, + .drb_cnt = {6144, 6144, 6144, 6144, 6144}, + .cap = DATA_F_LRO | DATA_F_RXFH | DATA_F_INTR_COALESCE, +}; + +enum dpmaif_state { + DPMAIF_STATE_MIN, + DPMAIF_STATE_PWROFF, + DPMAIF_STATE_PWRON, + DPMAIF_STATE_MAX +}; + +struct dpmaif_vq { + unsigned char q_id; + u32 max_len; /* align network tx qdisc 1000 */ + struct sk_buff_head list; +}; + +struct dpmaif_cmd_srv { + struct mtk_dpmaif_ctlb *dcb; + struct work_struct work; + struct dpmaif_vq *vq; +}; + +struct dpmaif_tx_srv { + struct mtk_dpmaif_ctlb *dcb; + unsigned char id; + int prio; + wait_queue_head_t wait; + struct task_struct *srv; + + unsigned long txq_drb_lack_sta; + unsigned char cur_vq_id; + unsigned char vq_cnt; + struct dpmaif_vq *vq[DPMAIF_TXQ_CNT_MAX]; +}; + +struct dpmaif_drb_skb { + struct sk_buff *skb; + dma_addr_t data_dma_addr; + unsigned short data_len; + unsigned short drb_idx:13; + unsigned short is_msg:1; + unsigned short is_frag:1; + unsigned short is_last:1; +}; + +struct dpmaif_txq { + struct mtk_dpmaif_ctlb *dcb; + unsigned char id; + atomic_t budget; + atomic_t to_submit_cnt; + struct dpmaif_pd_drb *drb_base; + dma_addr_t drb_dma_addr; + unsigned int drb_cnt; + unsigned short drb_wr_idx; + unsigned short drb_rd_idx; + unsigned short drb_rel_rd_idx; + unsigned long long dma_map_errs; + unsigned short last_ch_id; + struct dpmaif_drb_skb *sw_drb_base; + unsigned int doorbell_delay; + struct delayed_work doorbell_work; + struct delayed_work tx_done_work; + unsigned int intr_coalesce_frame; +}; + +struct dpmaif_rx_record { + bool msg_pit_recv; + struct sk_buff *cur_skb; + struct sk_buff *lro_parent; + struct sk_buff *lro_last_skb; + unsigned int lro_pkt_cnt; + unsigned int cur_ch_id; + unsigned int checksum; + unsigned int hash; + unsigned char pit_dp; + unsigned char err_payload; +}; + +struct dpmaif_rxq { + struct mtk_dpmaif_ctlb *dcb; + unsigned char id; + bool started; + struct dpmaif_pd_pit *pit_base; + dma_addr_t pit_dma_addr; + unsigned int pit_cnt; + unsigned short pit_wr_idx; + unsigned short pit_rd_idx; + unsigned short pit_rel_rd_idx; + unsigned char pit_seq_expect; + unsigned int pit_rel_cnt; + bool pit_cnt_err_intr_set; + unsigned int pit_burst_rel_cnt; + unsigned int pit_seq_fail_cnt; + struct napi_struct napi; + struct dpmaif_rx_record rx_record; + unsigned int intr_coalesce_frame; +}; + +struct skb_mapped_t { + struct sk_buff *skb; + dma_addr_t data_dma_addr; + unsigned int data_len; +}; + +struct page_mapped_t { + struct page *page; + dma_addr_t data_dma_addr; + unsigned int offset; + unsigned int data_len; +}; + +union dpmaif_bat_record { + struct skb_mapped_t normal; + struct 
page_mapped_t frag; +}; + +struct dpmaif_bat_ring { + enum dpmaif_bat_type type; + struct dpmaif_bat *bat_base; + dma_addr_t bat_dma_addr; + unsigned int bat_cnt; + unsigned short bat_wr_idx; + unsigned short bat_rd_idx; + unsigned short bat_rel_rd_idx; + union dpmaif_bat_record *sw_record_base; + unsigned int buf_size; + unsigned char *mask_tbl; + struct work_struct reload_work; + bool bat_cnt_err_intr_set; +}; + +struct dpmaif_bat_info { + struct mtk_dpmaif_ctlb *dcb; + unsigned int max_mtu; + bool frag_bat_enabled; + + struct dpmaif_bat_ring normal_bat_ring; + struct dpmaif_bat_ring frag_bat_ring; + + struct workqueue_struct *reload_wq; +}; + +struct dpmaif_irq_param { + unsigned char idx; + struct mtk_dpmaif_ctlb *dcb; + enum mtk_irq_src dpmaif_irq_src; + int dev_irq_id; +}; + +struct dpmaif_tx_evt { + unsigned long long ul_done; + unsigned long long ul_drb_empty; +}; + +struct dpmaif_rx_evt { + unsigned long long dl_done; + unsigned long long pit_len_err; +}; + +struct dpmaif_other_evt { + unsigned long long ul_md_notready; + unsigned long long ul_md_pwr_notready; + unsigned long long ul_len_err; + + unsigned long long dl_skb_len_err; + unsigned long long dl_bat_cnt_len_err; + unsigned long long dl_pkt_empty; + unsigned long long dl_frag_empty; + unsigned long long dl_mtu_err; + unsigned long long dl_frag_cnt_len_err; + unsigned long long hpc_ent_type_err; +}; + +struct dpmaif_traffic_stats { + /* txq traffic */ + unsigned long long tx_sw_packets[DPMAIF_TXQ_CNT_MAX]; + unsigned long long tx_hw_packets[DPMAIF_TXQ_CNT_MAX]; + unsigned long long tx_done_last_time[DPMAIF_TXQ_CNT_MAX]; + unsigned int tx_done_last_cnt[DPMAIF_TXQ_CNT_MAX]; + + /* rxq traffic */ + unsigned long long rx_packets[DPMAIF_RXQ_CNT_MAX]; + unsigned long long rx_errors[DPMAIF_RXQ_CNT_MAX]; + unsigned long long rx_dropped[DPMAIF_RXQ_CNT_MAX]; + unsigned long long rx_hw_ind_dropped[DPMAIF_RXQ_CNT_MAX]; + unsigned long long rx_done_last_time[DPMAIF_RXQ_CNT_MAX]; + unsigned int rx_done_last_cnt[DPMAIF_RXQ_CNT_MAX]; + + /* irq traffic */ + unsigned long long irq_total_cnt[DPMAIF_IRQ_CNT_MAX]; + unsigned long long irq_last_time[DPMAIF_IRQ_CNT_MAX]; + struct dpmaif_tx_evt irq_tx_evt[DPMAIF_TXQ_CNT_MAX]; + struct dpmaif_rx_evt irq_rx_evt[DPMAIF_RXQ_CNT_MAX]; + struct dpmaif_other_evt irq_other_evt; +}; + +enum dpmaif_dump_flag { + DPMAIF_DUMP_TX_PKT = 0, + DPMAIF_DUMP_RX_PKT, + DPMAIF_DUMP_DRB, + DPMAIF_DUMP_PIT +}; + +struct mtk_dpmaif_ctlb { + struct mtk_data_blk *data_blk; + struct dpmaif_drv_info *drv_info; + struct napi_struct *napi[DPMAIF_RXQ_CNT_MAX]; + + enum dpmaif_state dpmaif_state; + bool dpmaif_user_ready; + bool trans_enabled; + /* lock for enable/disable routine */ + struct mutex trans_ctl_lock; + const struct dpmaif_res_cfg *res_cfg; + + struct dpmaif_cmd_srv cmd_srv; + struct dpmaif_vq cmd_vq; + struct dpmaif_tx_srv *tx_srvs; + struct dpmaif_vq *tx_vqs; + + struct workqueue_struct *tx_done_wq; + struct workqueue_struct *tx_doorbell_wq; + struct dpmaif_txq *txqs; + struct dpmaif_rxq *rxqs; + struct dpmaif_bat_info bat_info; + bool irq_enabled; + struct dpmaif_irq_param *irq_params; + + struct mtk_bm_pool *skb_pool; + struct mtk_bm_pool *page_pool; + + struct dpmaif_traffic_stats traffic_stats; + struct mtk_data_intr_coalesce intr_coalesce; +}; + +struct dpmaif_pkt_info { + unsigned char intf_id; + unsigned char drb_cnt; +}; + +#define DPMAIF_SKB_CB(__skb) ((struct dpmaif_pkt_info *)&((__skb)->cb[0])) + +#define DCB_TO_DEV(dcb) ((dcb)->data_blk->mdev->dev) +#define DCB_TO_MDEV(dcb) ((dcb)->data_blk->mdev) 
+#define DCB_TO_DEV_STR(dcb) ((dcb)->data_blk->mdev->dev_str) +#define DPMAIF_GET_HW_VER(dcb) ((dcb)->data_blk->mdev->hw_ver) +#define DPMAIF_GET_DRB_CNT(__skb) (skb_shinfo(__skb)->nr_frags + 1 + 1) + +#define DPMAIF_JUMBO_SIZE 9000 +#define DPMAIF_DFLT_MTU 3000 +#define DPMAIF_DFLT_LRO_ENABLE true +#define DPMAIF_DL_BUF_MIN_SIZE 128 +#define DPMAIF_BUF_THRESHOLD (DPMAIF_DL_BUF_MIN_SIZE * 28) /* 3.5k, should be less than page size */ +#define DPMAIF_NORMAL_BUF_SIZE_IN_JUMBO (128 * 13) /* 1664 */ +#define DPMAIF_FRAG_BUF_SIZE_IN_JUMBO (128 * 15) /* 1920 */ + +static bool dpmaif_lro_enable = DPMAIF_DFLT_LRO_ENABLE; + +static unsigned int mtk_dpmaif_ring_buf_get_next_idx(unsigned int buf_len, unsigned int buf_idx) +{ + return (++buf_idx) % buf_len; +} + +static unsigned int mtk_dpmaif_ring_buf_readable(unsigned int total_cnt, unsigned int rd_idx, + unsigned int wr_idx) +{ + unsigned int pkt_cnt; + + if (wr_idx >= rd_idx) + pkt_cnt = wr_idx - rd_idx; + else + pkt_cnt = total_cnt + wr_idx - rd_idx; + + return pkt_cnt; +} + +static unsigned int mtk_dpmaif_ring_buf_writable(unsigned int total_cnt, unsigned int rel_idx, + unsigned int wr_idx) +{ + unsigned int pkt_cnt; + + if (wr_idx < rel_idx) + pkt_cnt = rel_idx - wr_idx - 1; + else + pkt_cnt = total_cnt + rel_idx - wr_idx - 1; + + return pkt_cnt; +} + +static unsigned int mtk_dpmaif_ring_buf_releasable(unsigned int total_cnt, unsigned int rel_idx, + unsigned int rd_idx) +{ + unsigned int pkt_cnt; + + if (rel_idx <= rd_idx) + pkt_cnt = rd_idx - rel_idx; + else + pkt_cnt = total_cnt + rd_idx - rel_idx; + + return pkt_cnt; +} + +static void mtk_dpmaif_trigger_dev_exception(struct mtk_dpmaif_ctlb *dcb) +{ + mtk_hw_send_ext_evt(DCB_TO_MDEV(dcb), EXT_EVT_H2D_RESERVED_FOR_DPMAIF); +} + +static void mtk_dpmaif_common_err_handle(struct mtk_dpmaif_ctlb *dcb, bool is_hw) +{ + if (!is_hw) { + dev_err(DCB_TO_DEV(dcb), "ASSERT file: %s, function: %s, line %d", + __FILE__, __func__, __LINE__); + return; + } + + if (mtk_hw_mmio_check(DCB_TO_MDEV(dcb))) + dev_err(DCB_TO_DEV(dcb), "Failed to access mmio\n"); + else + mtk_dpmaif_trigger_dev_exception(dcb); +} + +static unsigned int mtk_dpmaif_pit_bid(struct dpmaif_pd_pit *pit_info) +{ + unsigned int buf_id = FIELD_GET(PIT_PD_H_BID, le32_to_cpu(pit_info->pd_footer)) << 13; + + return buf_id + FIELD_GET(PIT_PD_BUF_ID, le32_to_cpu(pit_info->pd_header)); +} + +static void mtk_dpmaif_disable_irq(struct mtk_dpmaif_ctlb *dcb) +{ + unsigned char irq_cnt = dcb->res_cfg->irq_cnt; + struct dpmaif_irq_param *irq_param; + int i; + + if (!dcb->irq_enabled) + return; + + dcb->irq_enabled = false; + for (i = 0; i < irq_cnt; i++) { + irq_param = &dcb->irq_params[i]; + if (mtk_hw_mask_irq(DCB_TO_MDEV(dcb), irq_param->dev_irq_id) != 0) + dev_err(DCB_TO_DEV(dcb), "Failed to mask dev irq%d\n", + irq_param->dev_irq_id); + } +} + +static void mtk_dpmaif_enable_irq(struct mtk_dpmaif_ctlb *dcb) +{ + unsigned char irq_cnt = dcb->res_cfg->irq_cnt; + struct dpmaif_irq_param *irq_param; + int i; + + if (dcb->irq_enabled) + return; + + dcb->irq_enabled = true; + for (i = 0; i < irq_cnt; i++) { + irq_param = &dcb->irq_params[i]; + if (mtk_hw_unmask_irq(DCB_TO_MDEV(dcb), irq_param->dev_irq_id) != 0) + dev_err(DCB_TO_DEV(dcb), "Failed to unmask dev irq%d\n", + irq_param->dev_irq_id); + } +} + +static int mtk_dpmaif_set_rx_bat(struct mtk_dpmaif_ctlb *dcb, struct dpmaif_bat_ring *bat_ring, + unsigned int bat_cnt) +{ + unsigned short old_sw_rel_rd_idx, new_sw_wr_idx, old_sw_wr_idx; + int ret = 0; + + old_sw_rel_rd_idx = bat_ring->bat_rel_rd_idx; + 
old_sw_wr_idx = bat_ring->bat_wr_idx; + new_sw_wr_idx = old_sw_wr_idx + bat_cnt; + + /* bat_wr_idx should not exceed bat_rel_rd_idx. */ + if (old_sw_rel_rd_idx > old_sw_wr_idx) { + if (new_sw_wr_idx >= old_sw_rel_rd_idx) + ret = -DATA_FLOW_CHK_ERR; + } else { + if (new_sw_wr_idx >= bat_ring->bat_cnt) { + new_sw_wr_idx = new_sw_wr_idx - bat_ring->bat_cnt; + if (new_sw_wr_idx >= old_sw_rel_rd_idx) + ret = -DATA_FLOW_CHK_ERR; + } + } + + if (ret < 0) { + dev_err(DCB_TO_DEV(dcb), + "Failed to check bat, new_sw_wr_idx=%u, old_sw_rl_idx=%u\n", + new_sw_wr_idx, old_sw_rel_rd_idx); + goto out; + } + + bat_ring->bat_wr_idx = new_sw_wr_idx; +out: + return ret; +} + +static int mtk_dpmaif_reload_rx_skb(struct mtk_dpmaif_ctlb *dcb, + struct dpmaif_bat_ring *bat_ring, unsigned int buf_cnt) +{ + union dpmaif_bat_record *cur_bat_record; + struct skb_mapped_t *skb_info; + unsigned short cur_bat_idx; + struct dpmaif_bat *cur_bat; + unsigned int i; + int ret; + + /* Pin rx buffers to BAT entries */ + cur_bat_idx = bat_ring->bat_wr_idx; + for (i = 0 ; i < buf_cnt; i++) { + /* For re-init flow, in re-init flow, we don't release + * the rx buffer on FSM_STATE_OFF state. + * because we will pin rx buffers to BAT entries + * again on FSM_STATE_BOOTUP state. + */ + cur_bat_record = bat_ring->sw_record_base + cur_bat_idx; + skb_info = &cur_bat_record->normal; + if (!skb_info->skb) { + skb_info->skb = mtk_bm_alloc(dcb->skb_pool); + if (unlikely(!skb_info->skb)) { + dev_err(DCB_TO_DEV(dcb), "Failed to alloc skb, bat%d buf_cnt:%u/%u\n", + bat_ring->type, buf_cnt, i); + break; + } + + skb_info->data_len = bat_ring->buf_size; + ret = mtk_dma_map_single(DCB_TO_MDEV(dcb), + &skb_info->data_dma_addr, + skb_info->skb->data, + skb_info->data_len, + DMA_FROM_DEVICE); + + if (unlikely(ret < 0)) { + dev_kfree_skb_any(skb_info->skb); + skb_info->skb = NULL; + break; + } + } + + cur_bat = bat_ring->bat_base + cur_bat_idx; + cur_bat->buf_addr_high = cpu_to_le32(upper_32_bits(skb_info->data_dma_addr)); + cur_bat->buf_addr_low = cpu_to_le32(lower_32_bits(skb_info->data_dma_addr)); + + cur_bat_idx = mtk_dpmaif_ring_buf_get_next_idx(bat_ring->bat_cnt, cur_bat_idx); + } + + ret = i; + if (unlikely(ret == 0)) + ret = -DATA_LOW_MEM_SKB; + + return ret; +} + +static int mtk_dpmaif_reload_rx_page(struct mtk_dpmaif_ctlb *dcb, + struct dpmaif_bat_ring *bat_ring, unsigned int buf_cnt) +{ + union dpmaif_bat_record *cur_bat_record; + struct page_mapped_t *page_info; + unsigned short cur_bat_idx; + struct dpmaif_bat *cur_bat; + unsigned int i; + void *data; + int ret; + + /* Pin rx buffers to BAT entries */ + cur_bat_idx = bat_ring->bat_wr_idx; + for (i = 0 ; i < buf_cnt; i++) { + /* For re-init flow, In re-init flow, we don't release + * the rx buffer on FSM_STATE_OFF state. + * because we will pin rx buffers to BAT + * entries again on FSM_STATE_BOOTUP state. 
+ */ + cur_bat_record = bat_ring->sw_record_base + cur_bat_idx; + page_info = &cur_bat_record->frag; + + if (!page_info->page) { + data = mtk_bm_alloc(dcb->page_pool); + if (unlikely(!data)) { + dev_err(DCB_TO_DEV(dcb), "Failed to alloc page, bat%d buf_cnt:%u/%u\n", + bat_ring->type, buf_cnt, i); + break; + } + + page_info->page = virt_to_head_page(data); + page_info->offset = data - page_address(page_info->page); + page_info->data_len = bat_ring->buf_size; + ret = mtk_dma_map_page(DCB_TO_MDEV(dcb), + &page_info->data_dma_addr, + page_info->page, + page_info->offset, + page_info->data_len, + DMA_FROM_DEVICE); + if (unlikely(ret < 0)) { + put_page(page_info->page); + page_info->page = NULL; + break; + } + } + + cur_bat = bat_ring->bat_base + cur_bat_idx; + cur_bat->buf_addr_high = cpu_to_le32(upper_32_bits(page_info->data_dma_addr)); + cur_bat->buf_addr_low = cpu_to_le32(lower_32_bits(page_info->data_dma_addr)); + cur_bat_idx = mtk_dpmaif_ring_buf_get_next_idx(bat_ring->bat_cnt, cur_bat_idx); + } + + ret = i; + if (unlikely(ret == 0)) + ret = -DATA_LOW_MEM_SKB; + + return ret; +} + +static int mtk_dpmaif_reload_rx_buf(struct mtk_dpmaif_ctlb *dcb, + struct dpmaif_bat_ring *bat_ring, + unsigned int buf_cnt, bool send_doorbell) +{ + unsigned int reload_cnt, bat_cnt; + int ret; + + if (unlikely(buf_cnt == 0 || buf_cnt > bat_ring->bat_cnt)) { + dev_err(DCB_TO_DEV(dcb), "Invalid alloc bat buffer count\n"); + return -DATA_FLOW_CHK_ERR; + } + + /* Get bat count that be reloaded rx buffer and check + * Rx buffer count should not be greater than bat entry count, + * because one rx buffer is pined to one bat entry. + */ + bat_cnt = mtk_dpmaif_ring_buf_writable(bat_ring->bat_cnt, bat_ring->bat_rel_rd_idx, + bat_ring->bat_wr_idx); + if (unlikely(buf_cnt > bat_cnt)) { + dev_err(DCB_TO_DEV(dcb), + "Invalid parameter,bat%d: rx_buff>bat_entries(%u>%u), w/r/rel-%u,%u,%u\n", + bat_ring->type, buf_cnt, bat_cnt, bat_ring->bat_wr_idx, + bat_ring->bat_rd_idx, bat_ring->bat_rel_rd_idx); + return -DATA_FLOW_CHK_ERR; + } + + /* Allocate rx buffer and pin it to bat entry. */ + if (bat_ring->type == NORMAL_BAT) + ret = mtk_dpmaif_reload_rx_skb(dcb, bat_ring, buf_cnt); + else + ret = mtk_dpmaif_reload_rx_page(dcb, bat_ring, buf_cnt); + + if (ret < 0) + return -DATA_LOW_MEM_SKB; + + /* Check and update bat_wr_idx */ + reload_cnt = ret; + ret = mtk_dpmaif_set_rx_bat(dcb, bat_ring, reload_cnt); + if (unlikely(ret < 0)) { + dev_err(DCB_TO_DEV(dcb), "Failed to update bat_wr_idx\n"); + goto out; + } + + /* Make sure all frag bat information write done before notify HW. */ + dma_wmb(); + + /* Notify hw the available frag bat buffer count. */ + if (send_doorbell) { + if (bat_ring->type == NORMAL_BAT) + ret = mtk_dpmaif_drv_send_doorbell(dcb->drv_info, DPMAIF_BAT, + 0, reload_cnt); + else + ret = mtk_dpmaif_drv_send_doorbell(dcb->drv_info, DPMAIF_FRAG, + 0, reload_cnt); + if (unlikely(ret < 0)) { + dev_err(DCB_TO_DEV(dcb), "Failed to send frag bat doorbell\n"); + mtk_dpmaif_common_err_handle(dcb, true); + goto out; + } + } + + return 0; +out: + return ret; +} + +static unsigned int mtk_dpmaif_chk_rel_bat_cnt(struct mtk_dpmaif_ctlb *dcb, + struct dpmaif_bat_ring *bat_ring) +{ + unsigned int i, cur_idx; + unsigned int count = 0; + unsigned char mask_val; + + /* Check and get the continuous used entries, + * and it is also the count that will be recycle. 
+ */ + cur_idx = bat_ring->bat_rel_rd_idx; + for (i = 0; i < bat_ring->bat_cnt; i++) { + mask_val = bat_ring->mask_tbl[cur_idx]; + if (mask_val == 1) + count++; + else + break; + + cur_idx = mtk_dpmaif_ring_buf_get_next_idx(bat_ring->bat_cnt, cur_idx); + } + + return count; +} + +static int mtk_dpmaif_recycle_bat(struct mtk_dpmaif_ctlb *dcb, struct dpmaif_bat_ring *bat_ring, + unsigned int rel_bat_cnt) +{ + unsigned short old_sw_rel_idx, new_sw_rel_idx, hw_rd_idx; + bool type = bat_ring->type == NORMAL_BAT; + unsigned int cur_idx; + unsigned int i; + int ret; + + old_sw_rel_idx = bat_ring->bat_rel_rd_idx; + new_sw_rel_idx = old_sw_rel_idx + rel_bat_cnt; + + ret = mtk_dpmaif_drv_get_ring_idx(dcb->drv_info, + type ? DPMAIF_BAT_RIDX : DPMAIF_FRAG_RIDX, 0); + if (unlikely(ret < 0)) { + mtk_dpmaif_common_err_handle(dcb, true); + return ret; + } + + hw_rd_idx = ret; + bat_ring->bat_rd_idx = hw_rd_idx; + + /* Queue is empty and no need to release. */ + if (bat_ring->bat_wr_idx == old_sw_rel_idx) { + ret = -DATA_FLOW_CHK_ERR; + goto out; + } + + /* bat_rel_rd_idx should not exceed bat_rd_idx. */ + if (hw_rd_idx > old_sw_rel_idx) { + if (new_sw_rel_idx > hw_rd_idx) { + ret = -DATA_FLOW_CHK_ERR; + goto out; + } + } else if (hw_rd_idx < old_sw_rel_idx) { + if (new_sw_rel_idx >= bat_ring->bat_cnt) { + new_sw_rel_idx = new_sw_rel_idx - bat_ring->bat_cnt; + if (new_sw_rel_idx > hw_rd_idx) { + ret = -DATA_FLOW_CHK_ERR; + goto out; + } + } + } + + /* Reset bat mask value. */ + cur_idx = bat_ring->bat_rel_rd_idx; + for (i = 0; i < rel_bat_cnt; i++) { + bat_ring->mask_tbl[cur_idx] = 0; + cur_idx = mtk_dpmaif_ring_buf_get_next_idx(bat_ring->bat_cnt, cur_idx); + } + + bat_ring->bat_rel_rd_idx = new_sw_rel_idx; + + return rel_bat_cnt; + +out: + dev_err(DCB_TO_DEV(dcb), + "Failed to check bat%d rel_rd_idx, bat_rd=%u,old_sw_rel=%u, new_sw_rel=%u\n", + bat_ring->type, bat_ring->bat_rd_idx, old_sw_rel_idx, new_sw_rel_idx); + + return ret; +} + +static int mtk_dpmaif_reload_bat(struct mtk_dpmaif_ctlb *dcb, struct dpmaif_bat_ring *bat_ring) +{ + unsigned int rel_bat_cnt; + int ret = 0; + + rel_bat_cnt = mtk_dpmaif_chk_rel_bat_cnt(dcb, bat_ring); + if (unlikely(rel_bat_cnt == 0)) + goto out; + + /* Check and update bat_rd_idx, bat_rel_rd_idx. */ + ret = mtk_dpmaif_recycle_bat(dcb, bat_ring, rel_bat_cnt); + if (unlikely(ret < 0)) + goto out; + + /* Reload rx buffer, pin buffer to bat entries. + * update bat_wr_idx + * send doorbell to HW about new available BAT entries. + */ + ret = mtk_dpmaif_reload_rx_buf(dcb, bat_ring, rel_bat_cnt, true); +out: + return ret; +} + +static void mtk_dpmaif_bat_reload_work(struct work_struct *work) +{ + struct dpmaif_bat_ring *bat_ring; + struct dpmaif_bat_info *bat_info; + struct mtk_dpmaif_ctlb *dcb; + int ret; + + bat_ring = container_of(work, struct dpmaif_bat_ring, reload_work); + + if (bat_ring->type == NORMAL_BAT) + bat_info = container_of(bat_ring, struct dpmaif_bat_info, normal_bat_ring); + else + bat_info = container_of(bat_ring, struct dpmaif_bat_info, frag_bat_ring); + + dcb = bat_info->dcb; + + if (bat_ring->type == NORMAL_BAT) { + /* Recycle normal bat and reload rx normal buffer. 
*/ + ret = mtk_dpmaif_reload_bat(dcb, bat_ring); + if (unlikely(ret < 0)) { + dev_err(DCB_TO_DEV(dcb), + "Failed to recycle normal bat and reload rx buffer\n"); + return; + } + + if (bat_ring->bat_cnt_err_intr_set) { + bat_ring->bat_cnt_err_intr_set = false; + mtk_dpmaif_drv_intr_complete(dcb->drv_info, + DPMAIF_INTR_DL_BATCNT_LEN_ERR, 0, 0); + } + } else { + /* Recycle frag bat and reload rx page buffer. */ + if (dcb->bat_info.frag_bat_enabled) { + ret = mtk_dpmaif_reload_bat(dcb, bat_ring); + if (unlikely(ret < 0)) { + dev_err(DCB_TO_DEV(dcb), + "Failed to recycle frag bat and reload rx buffer\n"); + return; + } + + if (bat_ring->bat_cnt_err_intr_set) { + bat_ring->bat_cnt_err_intr_set = false; + mtk_dpmaif_drv_intr_complete(dcb->drv_info, + DPMAIF_INTR_DL_FRGCNT_LEN_ERR, 0, 0); + } + } + } +} + +static void mtk_dpmaif_queue_bat_reload_work(struct mtk_dpmaif_ctlb *dcb) +{ + /* Recycle normal bat and reload rx skb buffer. */ + queue_work(dcb->bat_info.reload_wq, &dcb->bat_info.normal_bat_ring.reload_work); + /* Recycle frag bat and reload rx page buffer. */ + if (dcb->bat_info.frag_bat_enabled) + queue_work(dcb->bat_info.reload_wq, &dcb->bat_info.frag_bat_ring.reload_work); +} + +static void mtk_dpmaif_set_bat_buf_size(struct mtk_dpmaif_ctlb *dcb, unsigned int mtu) +{ + struct dpmaif_bat_info *bat_info = &dcb->bat_info; + unsigned int buf_size; + + bat_info->max_mtu = mtu; + + /* Normal and frag BAT buffer size setting. */ + buf_size = mtu + DPMAIF_HW_PKT_ALIGN + DPMAIF_HW_BAT_RSVLEN; + if (buf_size <= DPMAIF_BUF_THRESHOLD) { + bat_info->frag_bat_enabled = false; + bat_info->normal_bat_ring.buf_size = ALIGN(buf_size, DPMAIF_DL_BUF_MIN_SIZE); + bat_info->frag_bat_ring.buf_size = 0; + } else { + bat_info->frag_bat_enabled = true; + bat_info->normal_bat_ring.buf_size = DPMAIF_NORMAL_BUF_SIZE_IN_JUMBO; + bat_info->frag_bat_ring.buf_size = DPMAIF_FRAG_BUF_SIZE_IN_JUMBO; + } +} + +static int mtk_dpmaif_bat_init(struct mtk_dpmaif_ctlb *dcb, + struct dpmaif_bat_ring *bat_ring, + enum dpmaif_bat_type type) +{ + int ret; + + bat_ring->type = type; + if (bat_ring->type == FRAG_BAT) + bat_ring->bat_cnt = dcb->res_cfg->frag_bat_cnt; + else + bat_ring->bat_cnt = dcb->res_cfg->normal_bat_cnt; + + bat_ring->bat_cnt_err_intr_set = false; + bat_ring->bat_rd_idx = 0; + bat_ring->bat_wr_idx = 0; + bat_ring->bat_rel_rd_idx = 0; + + /* Allocate BAT memory for HW and SW. */ + bat_ring->bat_base = dma_alloc_coherent(DCB_TO_DEV(dcb), bat_ring->bat_cnt * + sizeof(*bat_ring->bat_base), + &bat_ring->bat_dma_addr, GFP_KERNEL); + if (!bat_ring->bat_base) { + dev_err(DCB_TO_DEV(dcb), "Failed to allocate bat%d\n", bat_ring->type); + return -ENOMEM; + } + + /* Allocate buffer for SW to record skb information */ + bat_ring->sw_record_base = devm_kcalloc(DCB_TO_DEV(dcb), bat_ring->bat_cnt, + sizeof(*bat_ring->sw_record_base), GFP_KERNEL); + if (!bat_ring->sw_record_base) { + ret = -ENOMEM; + goto err_alloc_bat_buf; + } + + /* Alloc buffer for SW to recycle BAT. 
*/ + bat_ring->mask_tbl = devm_kcalloc(DCB_TO_DEV(dcb), bat_ring->bat_cnt, + sizeof(*bat_ring->mask_tbl), GFP_KERNEL); + if (!bat_ring->mask_tbl) { + ret = -ENOMEM; + goto err_alloc_mask_tbl; + } + + INIT_WORK(&bat_ring->reload_work, mtk_dpmaif_bat_reload_work); + + return 0; + +err_alloc_mask_tbl: + devm_kfree(DCB_TO_DEV(dcb), bat_ring->sw_record_base); + bat_ring->sw_record_base = NULL; + +err_alloc_bat_buf: + dma_free_coherent(DCB_TO_DEV(dcb), bat_ring->bat_cnt * sizeof(*bat_ring->bat_base), + bat_ring->bat_base, bat_ring->bat_dma_addr); + bat_ring->bat_base = NULL; + + return ret; +} + +static void mtk_dpmaif_bat_exit(struct mtk_dpmaif_ctlb *dcb, struct dpmaif_bat_ring *bat_ring, + enum dpmaif_bat_type type) +{ + union dpmaif_bat_record *bat_record; + struct page *page; + unsigned int i; + + flush_work(&bat_ring->reload_work); + + devm_kfree(DCB_TO_DEV(dcb), bat_ring->mask_tbl); + bat_ring->mask_tbl = NULL; + + if (bat_ring->sw_record_base) { + if (type == NORMAL_BAT) { + for (i = 0; i < bat_ring->bat_cnt; i++) { + bat_record = bat_ring->sw_record_base + i; + if (bat_record->normal.skb) { + dma_unmap_single(DCB_TO_DEV(dcb), + bat_record->normal.data_dma_addr, + bat_record->normal.data_len, + DMA_FROM_DEVICE); + dev_kfree_skb_any(bat_record->normal.skb); + } + } + } else { + for (i = 0; i < bat_ring->bat_cnt; i++) { + bat_record = bat_ring->sw_record_base + i; + page = bat_record->frag.page; + if (page) { + dma_unmap_page(DCB_TO_DEV(dcb), + bat_record->frag.data_dma_addr, + bat_record->frag.data_len, + DMA_FROM_DEVICE); + put_page(page); + } + } + } + + devm_kfree(DCB_TO_DEV(dcb), bat_ring->sw_record_base); + bat_ring->sw_record_base = NULL; + } + + if (bat_ring->bat_base) { + dma_free_coherent(DCB_TO_DEV(dcb), + bat_ring->bat_cnt * sizeof(*bat_ring->bat_base), + bat_ring->bat_base, bat_ring->bat_dma_addr); + bat_ring->bat_base = NULL; + } +} + +static void mtk_dpmaif_bat_ring_reset(struct dpmaif_bat_ring *bat_ring) +{ + bat_ring->bat_cnt_err_intr_set = false; + bat_ring->bat_wr_idx = 0; + bat_ring->bat_rd_idx = 0; + bat_ring->bat_rel_rd_idx = 0; + memset(bat_ring->bat_base, 0x00, (bat_ring->bat_cnt * sizeof(*bat_ring->bat_base))); + memset(bat_ring->mask_tbl, 0x00, (bat_ring->bat_cnt * sizeof(*bat_ring->mask_tbl))); +} + +static void mtk_dpmaif_bat_res_reset(struct dpmaif_bat_info *bat_info) +{ + mtk_dpmaif_bat_ring_reset(&bat_info->normal_bat_ring); + if (bat_info->frag_bat_enabled) + mtk_dpmaif_bat_ring_reset(&bat_info->frag_bat_ring); +} + +static int mtk_dpmaif_bat_res_init(struct mtk_dpmaif_ctlb *dcb) +{ + struct dpmaif_bat_info *bat_info = &dcb->bat_info; + int ret; + + bat_info->dcb = dcb; + ret = mtk_dpmaif_bat_init(dcb, &bat_info->normal_bat_ring, NORMAL_BAT); + if (ret < 0) { + dev_err(DCB_TO_DEV(dcb), "Failed to initialize normal bat resource\n"); + goto out; + } + + if (bat_info->frag_bat_enabled) { + ret = mtk_dpmaif_bat_init(dcb, &bat_info->frag_bat_ring, FRAG_BAT); + if (ret < 0) { + dev_err(DCB_TO_DEV(dcb), "Failed to initialize frag bat resource\n"); + goto err_init_frag_bat; + } + } + + bat_info->reload_wq = + alloc_workqueue("dpmaif_bat_reload_wq_%s", WQ_HIGHPRI | WQ_UNBOUND | + WQ_MEM_RECLAIM, FRAG_BAT + 1, DCB_TO_DEV_STR(dcb)); + if (!bat_info->reload_wq) { + dev_err(DCB_TO_DEV(dcb), "Failed to allocate bat reload workqueue\n"); + ret = -ENOMEM; + goto err_init_reload_wq; + } + + return 0; + +err_init_reload_wq: + mtk_dpmaif_bat_exit(dcb, &bat_info->frag_bat_ring, FRAG_BAT); + +err_init_frag_bat: + mtk_dpmaif_bat_exit(dcb, &bat_info->normal_bat_ring, NORMAL_BAT); + 
+out: + return ret; +} + +static void mtk_dpmaif_bat_res_exit(struct mtk_dpmaif_ctlb *dcb) +{ + struct dpmaif_bat_info *bat_info = &dcb->bat_info; + + if (bat_info->reload_wq) { + flush_workqueue(bat_info->reload_wq); + destroy_workqueue(bat_info->reload_wq); + bat_info->reload_wq = NULL; + } + + if (bat_info->frag_bat_enabled) + mtk_dpmaif_bat_exit(dcb, &bat_info->frag_bat_ring, FRAG_BAT); + + mtk_dpmaif_bat_exit(dcb, &bat_info->normal_bat_ring, NORMAL_BAT); +} + +static int mtk_dpmaif_rxq_init(struct mtk_dpmaif_ctlb *dcb, struct dpmaif_rxq *rxq) +{ + rxq->started = false; + rxq->pit_cnt = dcb->res_cfg->pit_cnt[rxq->id]; + rxq->pit_wr_idx = 0; + rxq->pit_rd_idx = 0; + rxq->pit_rel_rd_idx = 0; + rxq->pit_seq_expect = 0; + rxq->pit_rel_cnt = 0; + rxq->pit_cnt_err_intr_set = false; + rxq->pit_burst_rel_cnt = DPMAIF_PIT_CNT_UPDATE_THRESHOLD; + rxq->intr_coalesce_frame = dcb->intr_coalesce.rx_coalesced_frames; + rxq->pit_seq_fail_cnt = 0; + + memset(&rxq->rx_record, 0x00, sizeof(rxq->rx_record)); + + rxq->pit_base = dma_alloc_coherent(DCB_TO_DEV(dcb), + rxq->pit_cnt * sizeof(*rxq->pit_base), + &rxq->pit_dma_addr, GFP_KERNEL); + if (!rxq->pit_base) { + dev_err(DCB_TO_DEV(dcb), "Failed to allocate rxq%u pit resource\n", rxq->id); + return -ENOMEM; + } + + return 0; +} + +static void mtk_dpmaif_rxq_exit(struct mtk_dpmaif_ctlb *dcb, struct dpmaif_rxq *rxq) +{ + if (rxq->pit_base) { + dma_free_coherent(DCB_TO_DEV(dcb), + rxq->pit_cnt * sizeof(*rxq->pit_base), rxq->pit_base, + rxq->pit_dma_addr); + rxq->pit_base = NULL; + } +} + +static int mtk_dpmaif_sw_stop_rxq(struct mtk_dpmaif_ctlb *dcb, struct dpmaif_rxq *rxq) +{ + /* Rxq done process will check this flag, if rxq->started is false, process will stop. */ + rxq->started = false; + + /* Make sure rxq->started value update done. */ + smp_mb(); + + /* Wait rxq process done. */ + napi_synchronize(&rxq->napi); + + return 0; +} + +static void mtk_dpmaif_sw_stop_rx(struct mtk_dpmaif_ctlb *dcb) +{ + unsigned char rxq_cnt = dcb->res_cfg->rxq_cnt; + struct dpmaif_rxq *rxq; + int i; + + /* Stop all rx process. 
*/ + for (i = 0; i < rxq_cnt; i++) { + rxq = &dcb->rxqs[i]; + mtk_dpmaif_sw_stop_rxq(dcb, rxq); + } +} + +static void mtk_dpmaif_sw_start_rx(struct mtk_dpmaif_ctlb *dcb) +{ + unsigned char rxq_cnt = dcb->res_cfg->rxq_cnt; + struct dpmaif_rxq *rxq; + int i; + + for (i = 0; i < rxq_cnt; i++) { + rxq = &dcb->rxqs[i]; + rxq->started = true; + } +} + +static void mtk_dpmaif_sw_reset_rxq(struct dpmaif_rxq *rxq) +{ + memset(rxq->pit_base, 0x00, (rxq->pit_cnt * sizeof(*rxq->pit_base))); + memset(&rxq->rx_record, 0x00, sizeof(rxq->rx_record)); + + rxq->started = false; + rxq->pit_wr_idx = 0; + rxq->pit_rd_idx = 0; + rxq->pit_rel_rd_idx = 0; + rxq->pit_seq_expect = 0; + rxq->pit_rel_cnt = 0; + rxq->pit_cnt_err_intr_set = false; + rxq->pit_seq_fail_cnt = 0; +} + +static void mtk_dpmaif_rx_res_reset(struct mtk_dpmaif_ctlb *dcb) +{ + unsigned char rxq_cnt = dcb->res_cfg->rxq_cnt; + struct dpmaif_rxq *rxq; + int i; + + for (i = 0; i < rxq_cnt; i++) { + rxq = &dcb->rxqs[i]; + mtk_dpmaif_sw_reset_rxq(rxq); + } +} + +static int mtk_dpmaif_rx_res_init(struct mtk_dpmaif_ctlb *dcb) +{ + unsigned char rxq_cnt = dcb->res_cfg->rxq_cnt; + struct dpmaif_rxq *rxq; + int i, j; + int ret; + + dcb->rxqs = devm_kcalloc(DCB_TO_DEV(dcb), rxq_cnt, sizeof(*rxq), GFP_KERNEL); + if (!dcb->rxqs) + return -ENOMEM; + + for (i = 0; i < rxq_cnt; i++) { + rxq = &dcb->rxqs[i]; + rxq->id = i; + rxq->dcb = dcb; + ret = mtk_dpmaif_rxq_init(dcb, rxq); + if (ret < 0) { + dev_err(DCB_TO_DEV(dcb), "Failed to init rxq%u resource\n", rxq->id); + goto err_init_rxq; + } + } + + return 0; + +err_init_rxq: + for (j = i - 1; j >= 0; j--) + mtk_dpmaif_rxq_exit(dcb, &dcb->rxqs[j]); + + devm_kfree(DCB_TO_DEV(dcb), dcb->rxqs); + dcb->rxqs = NULL; + + return ret; +} + +static void mtk_dpmaif_rx_res_exit(struct mtk_dpmaif_ctlb *dcb) +{ + unsigned char rxq_cnt = dcb->res_cfg->rxq_cnt; + int i; + + for (i = 0; i < rxq_cnt; i++) + mtk_dpmaif_rxq_exit(dcb, &dcb->rxqs[i]); + + devm_kfree(DCB_TO_DEV(dcb), dcb->rxqs); + dcb->rxqs = NULL; +} + +static void mtk_dpmaif_tx_doorbell(struct work_struct *work) +{ + struct delayed_work *dwork = to_delayed_work(work); + struct mtk_dpmaif_ctlb *dcb; + unsigned int to_submit_cnt; + struct dpmaif_txq *txq; + int ret; + + txq = container_of(dwork, struct dpmaif_txq, doorbell_work); + dcb = txq->dcb; + + to_submit_cnt = atomic_read(&txq->to_submit_cnt); + + if (to_submit_cnt > 0) { + ret = mtk_dpmaif_drv_send_doorbell(dcb->drv_info, DPMAIF_DRB, + txq->id, to_submit_cnt); + if (unlikely(ret < 0)) { + dev_err(DCB_TO_DEV(dcb), "Failed to send txq%u doorbell\n", txq->id); + mtk_dpmaif_common_err_handle(dcb, true); + } + + atomic_sub(to_submit_cnt, &txq->to_submit_cnt); + } +} + +static unsigned int mtk_dpmaif_poll_tx_drb(struct dpmaif_txq *txq) +{ + unsigned short old_sw_rd_idx, new_hw_rd_idx; + struct mtk_dpmaif_ctlb *dcb = txq->dcb; + unsigned int drb_cnt; + int ret; + + old_sw_rd_idx = txq->drb_rd_idx; + ret = mtk_dpmaif_drv_get_ring_idx(dcb->drv_info, DPMAIF_DRB_RIDX, txq->id); + if (unlikely(ret < 0)) { + dev_err(DCB_TO_DEV(dcb), "Failed to read txq%u drb_rd_idx, ret=%d\n", txq->id, ret); + mtk_dpmaif_common_err_handle(dcb, true); + return 0; + } + + new_hw_rd_idx = ret; + + if (old_sw_rd_idx <= new_hw_rd_idx) + drb_cnt = new_hw_rd_idx - old_sw_rd_idx; + else + drb_cnt = txq->drb_cnt - old_sw_rd_idx + new_hw_rd_idx; + + txq->drb_rd_idx = new_hw_rd_idx; + + return drb_cnt; +} + +static int mtk_dpmaif_tx_rel_internal(struct dpmaif_txq *txq, + unsigned int rel_cnt, unsigned int *real_rel_cnt) +{ + struct dpmaif_pd_drb 
*cur_drb = NULL, *drb_base = txq->drb_base; + struct mtk_dpmaif_ctlb *dcb = txq->dcb; + struct dpmaif_drb_skb *cur_drb_skb; + struct dpmaif_msg_drb *msg_drb; + struct sk_buff *skb_free; + unsigned short cur_idx; + unsigned int i; + + cur_idx = txq->drb_rel_rd_idx; + for (i = 0 ; i < rel_cnt; i++) { + cur_drb = drb_base + cur_idx; + cur_drb_skb = txq->sw_drb_base + cur_idx; + if (FIELD_GET(DRB_PD_DTYP, le32_to_cpu(cur_drb->pd_header)) == PD_DRB) { + mtk_dma_unmap_single(DCB_TO_MDEV(dcb), cur_drb_skb->data_dma_addr, + cur_drb_skb->data_len, DMA_TO_DEVICE); + + /* The last one drb entry of one tx packet, so, skb will be released. */ + if (FIELD_GET(DRB_PD_CONT, le32_to_cpu(cur_drb->pd_header)) == + DPMAIF_DRB_LASTONE) { + skb_free = cur_drb_skb->skb; + if (!skb_free) { + dev_err(DCB_TO_DEV(dcb), + "txq%u pkt(%u), drb check fail, drb-w/r/rel-%u,%u,%u\n", + txq->id, cur_idx, txq->drb_wr_idx, + txq->drb_rd_idx, txq->drb_rel_rd_idx); + dev_err(DCB_TO_DEV(dcb), "release_cnt=%u, cur_id=%u\n", + rel_cnt, i); + + mtk_dpmaif_common_err_handle(dcb, false); + return -DATA_FLOW_CHK_ERR; + } + + dev_kfree_skb_any(skb_free); + dcb->traffic_stats.tx_hw_packets[txq->id]++; + } + } else { + msg_drb = (struct dpmaif_msg_drb *)cur_drb; + txq->last_ch_id = FIELD_GET(DRB_MSG_CHNL_ID, + le32_to_cpu(msg_drb->msg_header2)); + } + + cur_drb_skb->skb = NULL; + cur_idx = mtk_dpmaif_ring_buf_get_next_idx(txq->drb_cnt, cur_idx); + txq->drb_rel_rd_idx = cur_idx; + + atomic_inc(&txq->budget); + } + + *real_rel_cnt = i; + + return 0; +} + +static int mtk_dpmaif_tx_rel(struct dpmaif_txq *txq) +{ + struct mtk_dpmaif_ctlb *dcb = txq->dcb; + unsigned int real_rel_cnt = 0; + int ret = 0, rel_cnt; + + /* Update drb_rd_idx. */ + mtk_dpmaif_poll_tx_drb(txq); + + rel_cnt = mtk_dpmaif_ring_buf_releasable(txq->drb_cnt, txq->drb_rel_rd_idx, + txq->drb_rd_idx); + if (likely(rel_cnt > 0)) { + /* Release tx data buffer. */ + ret = mtk_dpmaif_tx_rel_internal(txq, rel_cnt, &real_rel_cnt); + dcb->traffic_stats.tx_done_last_cnt[txq->id] = real_rel_cnt; + } + + return ret; +} + +static void mtk_dpmaif_tx_done(struct work_struct *work) +{ + struct delayed_work *dwork = to_delayed_work(work); + struct mtk_dpmaif_ctlb *dcb; + struct dpmaif_txq *txq; + + txq = container_of(dwork, struct dpmaif_txq, tx_done_work); + dcb = txq->dcb; + + dcb->traffic_stats.tx_done_last_time[txq->id] = local_clock(); + + /* Recycle drb and release hardware tx done buffer around drb. */ + mtk_dpmaif_tx_rel(txq); + + /* try best to recycle drb */ + if (mtk_dpmaif_poll_tx_drb(txq) > 0) { + mtk_dpmaif_drv_clear_ip_busy(dcb->drv_info); + mtk_dpmaif_drv_intr_complete(dcb->drv_info, DPMAIF_INTR_UL_DONE, + txq->id, DPMAIF_CLEAR_INTR); + queue_delayed_work(dcb->tx_done_wq, &txq->tx_done_work, msecs_to_jiffies(0)); + } else { + mtk_dpmaif_drv_clear_ip_busy(dcb->drv_info); + mtk_dpmaif_drv_intr_complete(dcb->drv_info, DPMAIF_INTR_UL_DONE, + txq->id, DPMAIF_UNMASK_INTR); + } +} + +static int mtk_dpmaif_txq_init(struct mtk_dpmaif_ctlb *dcb, struct dpmaif_txq *txq) +{ + unsigned int drb_cnt = dcb->res_cfg->drb_cnt[txq->id]; + int ret; + + atomic_set(&txq->budget, drb_cnt); + atomic_set(&txq->to_submit_cnt, 0); + txq->drb_cnt = drb_cnt; + txq->drb_wr_idx = 0; + txq->drb_rd_idx = 0; + txq->drb_rel_rd_idx = 0; + txq->dma_map_errs = 0; + txq->last_ch_id = 0; + txq->doorbell_delay = dcb->res_cfg->txq_doorbell_delay[txq->id]; + txq->intr_coalesce_frame = dcb->intr_coalesce.tx_coalesced_frames; + + /* Allocate DRB memory for HW and SW. 
*/ + txq->drb_base = dma_alloc_coherent(DCB_TO_DEV(dcb), + txq->drb_cnt * sizeof(*txq->drb_base), + &txq->drb_dma_addr, GFP_KERNEL); + if (!txq->drb_base) { + dev_err(DCB_TO_DEV(dcb), "Failed to allocate txq%u drb resource\n", txq->id); + return -ENOMEM; + } + + /* Allocate buffer for SW to record the skb information. */ + txq->sw_drb_base = devm_kcalloc(DCB_TO_DEV(dcb), txq->drb_cnt, + sizeof(*txq->sw_drb_base), GFP_KERNEL); + if (!txq->sw_drb_base) { + ret = -ENOMEM; + goto err_alloc_drb_buf; + } + + /* It belongs to dcb->tx_done_wq. */ + INIT_DELAYED_WORK(&txq->tx_done_work, mtk_dpmaif_tx_done); + + /* It belongs to dcb->tx_doorbell_wq. */ + INIT_DELAYED_WORK(&txq->doorbell_work, mtk_dpmaif_tx_doorbell); + + return 0; + +err_alloc_drb_buf: + dma_free_coherent(DCB_TO_DEV(dcb), txq->drb_cnt * sizeof(*txq->drb_base), + txq->drb_base, txq->drb_dma_addr); + txq->drb_base = NULL; + + return ret; +} + +static void mtk_dpmaif_txq_exit(struct mtk_dpmaif_ctlb *dcb, struct dpmaif_txq *txq) +{ + struct dpmaif_drb_skb *drb_skb; + int i; + + if (txq->drb_base) { + dma_free_coherent(DCB_TO_DEV(dcb), txq->drb_cnt * sizeof(*txq->drb_base), + txq->drb_base, txq->drb_dma_addr); + txq->drb_base = NULL; + } + + if (txq->sw_drb_base) { + for (i = 0; i < txq->drb_cnt; i++) { + drb_skb = txq->sw_drb_base + i; + if (drb_skb->skb) { + /* Verify msg drb or payload drb, + * and only payload drb need to unmap dma. + */ + if (drb_skb->data_dma_addr) + mtk_dma_unmap_single(DCB_TO_MDEV(dcb), + drb_skb->data_dma_addr, + drb_skb->data_len, DMA_TO_DEVICE); + if (drb_skb->is_last) { + dev_kfree_skb_any(drb_skb->skb); + drb_skb->skb = NULL; + } + } + } + + devm_kfree(DCB_TO_DEV(dcb), txq->sw_drb_base); + txq->sw_drb_base = NULL; + } +} + +static int mtk_dpmaif_sw_wait_txq_stop(struct mtk_dpmaif_ctlb *dcb, struct dpmaif_txq *txq) +{ + /* Wait tx done work done. */ + flush_delayed_work(&txq->tx_done_work); + + /* Wait tx doorbell work done. */ + flush_delayed_work(&txq->doorbell_work); + + return 0; +} + +static void mtk_dpmaif_sw_wait_tx_stop(struct mtk_dpmaif_ctlb *dcb) +{ + unsigned char txq_cnt = dcb->res_cfg->txq_cnt; + int i; + + /* Wait all tx handle complete */ + for (i = 0; i < txq_cnt; i++) + mtk_dpmaif_sw_wait_txq_stop(dcb, &dcb->txqs[i]); +} + +static void mtk_dpmaif_sw_reset_txq(struct dpmaif_txq *txq) +{ + struct dpmaif_drb_skb *drb_skb; + int i; + + /* Drop all tx buffer around drb. */ + for (i = 0; i < txq->drb_cnt; i++) { + drb_skb = txq->sw_drb_base + i; + if (drb_skb->skb) { + mtk_dma_unmap_single(DCB_TO_MDEV(txq->dcb), drb_skb->data_dma_addr, + drb_skb->data_len, DMA_TO_DEVICE); + if (drb_skb->is_last) { + dev_kfree_skb_any(drb_skb->skb); + drb_skb->skb = NULL; + } + } + } + + /* Reset all txq resource. 
*/ + memset(txq->drb_base, 0x00, (txq->drb_cnt * sizeof(*txq->drb_base))); + memset(txq->sw_drb_base, 0x00, (txq->drb_cnt * sizeof(*txq->sw_drb_base))); + + atomic_set(&txq->budget, txq->drb_cnt); + atomic_set(&txq->to_submit_cnt, 0); + txq->drb_rd_idx = 0; + txq->drb_wr_idx = 0; + txq->drb_rel_rd_idx = 0; + txq->last_ch_id = 0; +} + +static void mtk_dpmaif_tx_res_reset(struct mtk_dpmaif_ctlb *dcb) +{ + unsigned char txq_cnt = dcb->res_cfg->txq_cnt; + struct dpmaif_txq *txq; + int i; + + for (i = 0; i < txq_cnt; i++) { + txq = &dcb->txqs[i]; + mtk_dpmaif_sw_reset_txq(txq); + } +} + +static int mtk_dpmaif_tx_res_init(struct mtk_dpmaif_ctlb *dcb) +{ + unsigned char txq_cnt = dcb->res_cfg->txq_cnt; + struct dpmaif_txq *txq; + int i, j; + int ret; + + dcb->txqs = devm_kcalloc(DCB_TO_DEV(dcb), txq_cnt, sizeof(*txq), GFP_KERNEL); + if (!dcb->txqs) + return -ENOMEM; + + for (i = 0; i < txq_cnt; i++) { + txq = &dcb->txqs[i]; + txq->id = i; + txq->dcb = dcb; + ret = mtk_dpmaif_txq_init(dcb, txq); + if (ret < 0) { + dev_err(DCB_TO_DEV(dcb), "Failed to init txq%d resource\n", txq->id); + goto err_init_txq; + } + } + + dcb->tx_done_wq = alloc_workqueue("dpmaif_tx_done_wq_%s", + WQ_UNBOUND | WQ_MEM_RECLAIM | WQ_HIGHPRI, + txq_cnt, DCB_TO_DEV_STR(dcb)); + if (!dcb->tx_done_wq) { + dev_err(DCB_TO_DEV(dcb), "Failed to allocate tx done workqueue\n"); + ret = -ENOMEM; + goto err_init_txq; + } + + dcb->tx_doorbell_wq = alloc_workqueue("dpmaif_tx_doorbell_wq_%s", + WQ_FREEZABLE | WQ_UNBOUND | + WQ_MEM_RECLAIM | WQ_HIGHPRI, + txq_cnt, DCB_TO_DEV_STR(dcb)); + if (!dcb->tx_doorbell_wq) { + dev_err(DCB_TO_DEV(dcb), "Failed to allocate tx doorbell workqueue\n"); + ret = -ENOMEM; + goto err_alloc_tx_doorbell_wq; + } + + return 0; + +err_alloc_tx_doorbell_wq: + flush_workqueue(dcb->tx_done_wq); + destroy_workqueue(dcb->tx_done_wq); + +err_init_txq: + for (j = i - 1; j >= 0; j--) + mtk_dpmaif_txq_exit(dcb, &dcb->txqs[j]); + + devm_kfree(DCB_TO_DEV(dcb), dcb->txqs); + dcb->txqs = NULL; + + return ret; +} + +static void mtk_dpmaif_tx_res_exit(struct mtk_dpmaif_ctlb *dcb) +{ + unsigned char txq_cnt = dcb->res_cfg->txq_cnt; + struct dpmaif_txq *txq; + int i; + + for (i = 0; i < txq_cnt; i++) { + txq = &dcb->txqs[i]; + flush_delayed_work(&txq->tx_done_work); + flush_delayed_work(&txq->doorbell_work); + } + + if (dcb->tx_doorbell_wq) { + flush_workqueue(dcb->tx_doorbell_wq); + destroy_workqueue(dcb->tx_doorbell_wq); + dcb->tx_doorbell_wq = NULL; + } + + if (dcb->tx_done_wq) { + flush_workqueue(dcb->tx_done_wq); + destroy_workqueue(dcb->tx_done_wq); + dcb->tx_done_wq = NULL; + } + + for (i = 0; i < txq_cnt; i++) + mtk_dpmaif_txq_exit(dcb, &dcb->txqs[i]); + + devm_kfree(DCB_TO_DEV(dcb), dcb->txqs); + dcb->txqs = NULL; +} + +static int mtk_dpmaif_sw_res_init(struct mtk_dpmaif_ctlb *dcb) +{ + int ret; + + ret = mtk_dpmaif_bat_res_init(dcb); + if (ret < 0) { + dev_err(DCB_TO_DEV(dcb), "Failed to initialize bat reource, ret=%d\n", ret); + goto err_init_bat_res; + } + + ret = mtk_dpmaif_rx_res_init(dcb); + if (ret < 0) { + dev_err(DCB_TO_DEV(dcb), "Failed to initialize rx reource, ret=%d\n", ret); + goto err_init_rx_res; + } + + ret = mtk_dpmaif_tx_res_init(dcb); + if (ret < 0) { + dev_err(DCB_TO_DEV(dcb), "Failed to initialize tx reource, ret=%d\n", ret); + goto err_init_tx_res; + } + + return 0; + +err_init_tx_res: + mtk_dpmaif_rx_res_exit(dcb); + +err_init_rx_res: + mtk_dpmaif_bat_res_exit(dcb); + +err_init_bat_res: + return ret; +} + +static void mtk_dpmaif_sw_res_exit(struct mtk_dpmaif_ctlb *dcb) +{ + 
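/* Tear down TX, RX and BAT resources in the reverse order of mtk_dpmaif_sw_res_init(). */ +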
mtk_dpmaif_tx_res_exit(dcb); + mtk_dpmaif_rx_res_exit(dcb); + mtk_dpmaif_bat_res_exit(dcb); +} + +static int mtk_dpmaif_bm_pool_init(struct mtk_dpmaif_ctlb *dcb) +{ + int ret; + + dcb->skb_pool = mtk_bm_pool_create(DCB_TO_MDEV(dcb), MTK_BUFF_SKB, + dcb->bat_info.normal_bat_ring.buf_size, + dcb->res_cfg->normal_bat_cnt, + MTK_BM_HIGH_PRIO); + if (!dcb->skb_pool) { + dev_err(DCB_TO_DEV(dcb), "Failed to create skb bm pool\n"); + return -ENOMEM; + } + + if (dcb->bat_info.frag_bat_enabled) { + dcb->page_pool = mtk_bm_pool_create(DCB_TO_MDEV(dcb), MTK_BUFF_PAGE, + dcb->bat_info.frag_bat_ring.buf_size, + dcb->res_cfg->frag_bat_cnt, + MTK_BM_HIGH_PRIO); + if (!dcb->page_pool) { + dev_err(DCB_TO_DEV(dcb), "Failed to create page bm pool\n"); + ret = -ENOMEM; + goto err_create_page_pool; + } + } + + return 0; + +err_create_page_pool: + mtk_bm_pool_destroy(DCB_TO_MDEV(dcb), dcb->skb_pool); + dcb->skb_pool = NULL; + + return ret; +} + +static void mtk_dpmaif_bm_pool_exit(struct mtk_dpmaif_ctlb *dcb) +{ + if (dcb->skb_pool) + mtk_bm_pool_destroy(DCB_TO_MDEV(dcb), dcb->skb_pool); + + if (dcb->bat_info.frag_bat_enabled) { + if (dcb->page_pool) + mtk_bm_pool_destroy(DCB_TO_MDEV(dcb), dcb->page_pool); + } +} + +static bool mtk_dpmaif_all_vqs_empty(struct dpmaif_tx_srv *tx_srv) +{ + bool is_empty = true; + struct dpmaif_vq *vq; + int i; + + for (i = 0; i < tx_srv->vq_cnt; i++) { + vq = tx_srv->vq[i]; + if (!skb_queue_empty(&vq->list)) { + is_empty = false; + break; + } + } + + return is_empty; +} + +static bool mtk_dpmaif_all_txqs_drb_lack(struct dpmaif_tx_srv *tx_srv) +{ + return !!tx_srv->txq_drb_lack_sta; +} + +static void mtk_dpmaif_set_drb_msg(struct mtk_dpmaif_ctlb *dcb, unsigned char q_id, + unsigned short cur_idx, unsigned int pkt_len, + unsigned short count_l, unsigned char channel_id, + unsigned short network_type) +{ + struct dpmaif_msg_drb *drb = (struct dpmaif_msg_drb *)dcb->txqs[q_id].drb_base + cur_idx; + + drb->msg_header1 = cpu_to_le32(FIELD_PREP(DRB_MSG_DTYP, MSG_DRB) | + FIELD_PREP(DRB_MSG_CONT, DPMAIF_DRB_MORE) | + FIELD_PREP(DRB_MSG_PKT_LEN, pkt_len)); + drb->msg_header2 = cpu_to_le32(FIELD_PREP(DRB_MSG_COUNT_L, count_l) | + FIELD_PREP(DRB_MSG_CHNL_ID, channel_id) | + FIELD_PREP(DRB_MSG_L4_CHK, 1) | + FIELD_PREP(DRB_MSG_NET_TYPE, 0)); +} + +static void mtk_dpmaif_set_drb_payload(struct mtk_dpmaif_ctlb *dcb, unsigned char q_id, + unsigned short cur_idx, unsigned long long data_addr, + unsigned int pkt_size, char last_one) +{ + struct dpmaif_pd_drb *drb = dcb->txqs[q_id].drb_base + cur_idx; + + drb->pd_header = cpu_to_le32(FIELD_PREP(DRB_PD_DTYP, PD_DRB)); + if (last_one) + drb->pd_header |= cpu_to_le32(FIELD_PREP(DRB_PD_CONT, DPMAIF_DRB_LASTONE)); + else + drb->pd_header |= cpu_to_le32(FIELD_PREP(DRB_PD_CONT, DPMAIF_DRB_MORE)); + + drb->pd_header |= cpu_to_le32(FIELD_PREP(DRB_PD_DATA_LEN, pkt_size)); + drb->addr_low = cpu_to_le32(lower_32_bits(data_addr)); + drb->addr_high = cpu_to_le32(upper_32_bits(data_addr)); +} + +static void mtk_dpmaif_record_drb_skb(struct mtk_dpmaif_ctlb *dcb, unsigned char q_id, + unsigned short cur_idx, struct sk_buff *skb, + unsigned short is_msg, unsigned short is_frag, + unsigned short is_last, dma_addr_t data_dma_addr, + unsigned int data_len) +{ + struct dpmaif_drb_skb *drb_skb = dcb->txqs[q_id].sw_drb_base + cur_idx; + + drb_skb->skb = skb; + drb_skb->data_dma_addr = data_dma_addr; + drb_skb->data_len = data_len; + drb_skb->drb_idx = cur_idx; + drb_skb->is_msg = is_msg; + drb_skb->is_frag = is_frag; + drb_skb->is_last = is_last; +} + +static int 
mtk_dpmaif_tx_fill_drb(struct mtk_dpmaif_ctlb *dcb, + unsigned char q_id, struct sk_buff *skb) +{ + unsigned short cur_idx, cur_backup_idx, is_frag, is_last; + unsigned int send_drb_cnt, wt_cnt, payload_cnt; + struct dpmaif_txq *txq = &dcb->txqs[q_id]; + struct dpmaif_drb_skb *cur_drb_skb; + struct skb_shared_info *info; + unsigned int data_len; + dma_addr_t data_dma_addr; + skb_frag_t *frag; + void *data_addr; + int i, ret; + + info = skb_shinfo(skb); + send_drb_cnt = DPMAIF_SKB_CB(skb)->drb_cnt; + payload_cnt = send_drb_cnt - 1; + cur_idx = txq->drb_wr_idx; + cur_backup_idx = cur_idx; + + /* Update tx drb, a msg drb first, then payload drb. */ + /* Update and record payload drb information. */ + mtk_dpmaif_set_drb_msg(dcb, txq->id, cur_idx, skb->len, 0, DPMAIF_SKB_CB(skb)->intf_id, + be16_to_cpu(skb->protocol)); + mtk_dpmaif_record_drb_skb(dcb, txq->id, cur_idx, skb, 1, 0, 0, 0, 0); + + /* Payload drb: skb->data + frags[]. */ + cur_idx = mtk_dpmaif_ring_buf_get_next_idx(txq->drb_cnt, cur_idx); + for (wt_cnt = 0; wt_cnt < payload_cnt; wt_cnt++) { + /* Get data_addr and data_len. */ + if (wt_cnt == 0) { + data_len = skb_headlen(skb); + data_addr = skb->data; + is_frag = 0; + } else { + frag = info->frags + wt_cnt - 1; + data_len = skb_frag_size(frag); + data_addr = skb_frag_address(frag); + is_frag = 1; + } + + if (wt_cnt == payload_cnt - 1) + is_last = 1; + else + is_last = 0; + + ret = mtk_dma_map_single(DCB_TO_MDEV(dcb), &data_dma_addr, + data_addr, data_len, DMA_TO_DEVICE); + if (unlikely(ret < 0)) { + txq->dma_map_errs++; + ret = -DATA_DMA_MAP_ERR; + goto err_dma_map; + } + + /* Update and record payload drb information. */ + mtk_dpmaif_set_drb_payload(dcb, txq->id, cur_idx, data_dma_addr, data_len, is_last); + mtk_dpmaif_record_drb_skb(dcb, txq->id, cur_idx, skb, 0, is_frag, is_last, + data_dma_addr, data_len); + + cur_idx = mtk_dpmaif_ring_buf_get_next_idx(txq->drb_cnt, cur_idx); + } + + txq->drb_wr_idx += send_drb_cnt; + if (txq->drb_wr_idx >= txq->drb_cnt) + txq->drb_wr_idx -= txq->drb_cnt; + + /* Make sure host write memory done before adding to_submit_cnt */ + smp_mb(); + + atomic_sub(send_drb_cnt, &txq->budget); + atomic_add(send_drb_cnt, &txq->to_submit_cnt); + + return 0; + +err_dma_map: + cur_drb_skb = txq->sw_drb_base + cur_backup_idx; + mtk_dpmaif_record_drb_skb(dcb, txq->id, cur_idx, NULL, 0, 0, 0, 0, 0); + cur_backup_idx = mtk_dpmaif_ring_buf_get_next_idx(txq->drb_cnt, cur_backup_idx); + for (i = 0; i < wt_cnt; i++) { + cur_drb_skb = txq->sw_drb_base + cur_backup_idx; + + mtk_dma_unmap_single(DCB_TO_MDEV(dcb), + cur_drb_skb->data_dma_addr, cur_drb_skb->data_len, + DMA_TO_DEVICE); + + cur_backup_idx = mtk_dpmaif_ring_buf_get_next_idx(txq->drb_cnt, cur_backup_idx); + mtk_dpmaif_record_drb_skb(dcb, txq->id, cur_idx, NULL, 0, 0, 0, 0, 0); + } + + return ret; +} + +static int mtk_dpmaif_tx_update_ring(struct mtk_dpmaif_ctlb *dcb, struct dpmaif_tx_srv *tx_srv, + struct dpmaif_vq *vq) +{ + struct dpmaif_txq *txq = &dcb->txqs[vq->q_id]; + unsigned char q_id = vq->q_id; + unsigned char skb_drb_cnt; + int i, drb_available_cnt; + struct sk_buff *skb; + int ret; + + drb_available_cnt = mtk_dpmaif_ring_buf_writable(txq->drb_cnt, + txq->drb_rel_rd_idx, txq->drb_wr_idx); + + clear_bit(q_id, &tx_srv->txq_drb_lack_sta); + for (i = 0; i < DPMAIF_SKB_TX_WEIGHT; i++) { + skb = skb_dequeue(&vq->list); + if (!skb) { + ret = 0; + break; + } + + skb_drb_cnt = DPMAIF_SKB_CB(skb)->drb_cnt; + if (drb_available_cnt < skb_drb_cnt) { + skb_queue_head(&vq->list, skb); + set_bit(q_id, 
&tx_srv->txq_drb_lack_sta); + ret = -DATA_LOW_MEM_DRB; + break; + } + + ret = mtk_dpmaif_tx_fill_drb(dcb, q_id, skb); + if (ret < 0) { + skb_queue_head(&vq->list, skb); + break; + } + drb_available_cnt -= skb_drb_cnt; + dcb->traffic_stats.tx_sw_packets[q_id]++; + } + + return ret; +} + +static struct dpmaif_vq *mtk_dpmaif_srv_select_vq(struct dpmaif_tx_srv *tx_srv) +{ + struct dpmaif_vq *vq; + int i; + + /* Round robin select tx vqs. */ + for (i = 0; i < tx_srv->vq_cnt; i++) { + tx_srv->cur_vq_id = tx_srv->cur_vq_id % tx_srv->vq_cnt; + vq = tx_srv->vq[tx_srv->cur_vq_id]; + tx_srv->cur_vq_id++; + if (!skb_queue_empty(&vq->list)) + return vq; + } + + return NULL; +} + +static void mtk_dpmaif_tx(struct dpmaif_tx_srv *tx_srv) +{ + struct mtk_dpmaif_ctlb *dcb = tx_srv->dcb; + struct dpmaif_txq *txq; + struct dpmaif_vq *vq; + int ret; + + do { + vq = mtk_dpmaif_srv_select_vq(tx_srv); + if (!vq) + break; + + ret = mtk_dpmaif_tx_update_ring(dcb, tx_srv, vq); + if (unlikely(ret < 0)) { + if (ret == -DATA_LOW_MEM_DRB && + mtk_dpmaif_all_txqs_drb_lack(tx_srv)) { + usleep_range(50, 100); + } + } + + /* Kick off tx doorbell workqueue. */ + txq = &dcb->txqs[vq->q_id]; + if (atomic_read(&txq->to_submit_cnt) > 0) { + queue_delayed_work(dcb->tx_doorbell_wq, &txq->doorbell_work, + msecs_to_jiffies(txq->doorbell_delay)); + } + + if (need_resched()) + cond_resched(); + } while (!kthread_should_stop() && (dcb->dpmaif_state == DPMAIF_STATE_PWRON)); +} + +static int mtk_dpmaif_tx_thread(void *arg) +{ + struct dpmaif_tx_srv *tx_srv = arg; + struct mtk_dpmaif_ctlb *dcb; + int ret; + + dcb = tx_srv->dcb; + set_user_nice(current, tx_srv->prio); + while (!kthread_should_stop()) { + if (mtk_dpmaif_all_vqs_empty(tx_srv) || + dcb->dpmaif_state != DPMAIF_STATE_PWRON) { + ret = wait_event_interruptible(tx_srv->wait, + (!mtk_dpmaif_all_vqs_empty(tx_srv) && + (dcb->dpmaif_state == DPMAIF_STATE_PWRON)) || + kthread_should_stop()); + if (ret == -ERESTARTSYS) + continue; + } + + if (kthread_should_stop()) + break; + + /* Send packets of all tx virtual queues belong to the tx service. 
*/ + mtk_dpmaif_tx(tx_srv); + } + + return 0; +} + +static int mtk_dpmaif_tx_srvs_start(struct mtk_dpmaif_ctlb *dcb) +{ + unsigned char srvs_cnt = dcb->res_cfg->tx_srv_cnt; + struct dpmaif_tx_srv *tx_srv; + int i, j, ret; + + for (i = 0; i < srvs_cnt; i++) { + tx_srv = &dcb->tx_srvs[i]; + tx_srv->cur_vq_id = 0; + tx_srv->txq_drb_lack_sta = 0; + if (tx_srv->srv) { + dev_err(DCB_TO_DEV(dcb), "tx_srv%d already exists\n", i); + continue; + } + tx_srv->srv = kthread_run(mtk_dpmaif_tx_thread, + tx_srv, "dpmaif_tx_srv%u_%s", + tx_srv->id, DCB_TO_DEV_STR(dcb)); + if (IS_ERR(tx_srv->srv)) { + dev_err(DCB_TO_DEV(dcb), "Failed to alloc dpmaif tx_srv%u\n", tx_srv->id); + ret = PTR_ERR(tx_srv->srv); + goto err_init_tx_srvs; + } + } + + return 0; + +err_init_tx_srvs: + tx_srv->srv = NULL; + for (j = i - 1; j >= 0; j--) { + tx_srv = &dcb->tx_srvs[j]; + if (tx_srv->srv) + kthread_stop(tx_srv->srv); + tx_srv->srv = NULL; + } + + return ret; +}
+ +static void mtk_dpmaif_tx_srvs_stop(struct mtk_dpmaif_ctlb *dcb) +{ + unsigned char srvs_cnt = dcb->res_cfg->tx_srv_cnt; + struct dpmaif_tx_srv *tx_srv; + int i; + + for (i = 0; i < srvs_cnt; i++) { + tx_srv = &dcb->tx_srvs[i]; + if (tx_srv->srv) + kthread_stop(tx_srv->srv); + + tx_srv->srv = NULL; + } +}
+ +static int mtk_dpmaif_tx_srvs_init(struct mtk_dpmaif_ctlb *dcb) +{ + unsigned char srvs_cnt = dcb->res_cfg->tx_srv_cnt; + unsigned char vqs_cnt = dcb->res_cfg->tx_vq_cnt; + struct dpmaif_tx_srv *tx_srv; + struct dpmaif_vq *tx_vq; + int i, j, vq_id; + int ret; + + /* Initialize all data packet tx virtual queues. */ + dcb->tx_vqs = devm_kcalloc(DCB_TO_DEV(dcb), vqs_cnt, sizeof(*dcb->tx_vqs), GFP_KERNEL); + if (!dcb->tx_vqs) + return -ENOMEM; + + for (i = 0; i < vqs_cnt; i++) { + tx_vq = &dcb->tx_vqs[i]; + tx_vq->q_id = i; + tx_vq->max_len = DEFAULT_TX_QUEUE_LEN; + skb_queue_head_init(&tx_vq->list); + } + + /* Initialize all data packet tx services. */ + dcb->tx_srvs = devm_kcalloc(DCB_TO_DEV(dcb), srvs_cnt, sizeof(*dcb->tx_srvs), GFP_KERNEL); + if (!dcb->tx_srvs) { + ret = -ENOMEM; + goto err_alloc_tx_srvs; + } + + for (i = 0; i < srvs_cnt; i++) { + tx_srv = &dcb->tx_srvs[i]; + tx_srv->dcb = dcb; + tx_srv->id = i; + tx_srv->prio = dcb->res_cfg->srv_prio_tbl[i]; + tx_srv->cur_vq_id = 0; + tx_srv->txq_drb_lack_sta = 0; + init_waitqueue_head(&tx_srv->wait); + + /* Set virtual queues and tx service mapping. */ + vq_id = 0; + for (j = 0; j < vqs_cnt; j++) { + if (tx_srv->id == dcb->res_cfg->tx_vq_srv_map[j]) { + tx_srv->vq[vq_id] = &dcb->tx_vqs[j]; + vq_id++; + } + } + + tx_srv->vq_cnt = vq_id; + if (tx_srv->vq_cnt == 0) + dev_err(DCB_TO_DEV(dcb), + "Invalid vq_cnt of tx_srv%u\n", tx_srv->id); + } + + return 0; + +err_alloc_tx_srvs: + devm_kfree(DCB_TO_DEV(dcb), dcb->tx_vqs); + dcb->tx_vqs = NULL; + + return ret; +}
+ +static void mtk_dpmaif_tx_vqs_reset(struct mtk_dpmaif_ctlb *dcb) +{ + unsigned char vqs_cnt = dcb->res_cfg->tx_vq_cnt; + struct dpmaif_vq *tx_vq; + int i; + + /* Drop all packets in tx virtual queues. */ + for (i = 0; i < vqs_cnt; i++) { + tx_vq = &dcb->tx_vqs[i]; + if (tx_vq) + skb_queue_purge(&tx_vq->list); + } +}
+ +static void mtk_dpmaif_tx_srvs_exit(struct mtk_dpmaif_ctlb *dcb) +{ + mtk_dpmaif_tx_srvs_stop(dcb); + + devm_kfree(DCB_TO_DEV(dcb), dcb->tx_srvs); + dcb->tx_srvs = NULL; + + /* Drop all packets in tx virtual queues. 
*/ + mtk_dpmaif_tx_vqs_reset(dcb); + + devm_kfree(DCB_TO_DEV(dcb), dcb->tx_vqs); + dcb->tx_vqs = NULL; +} + +static void mtk_dpmaif_trans_enable(struct mtk_dpmaif_ctlb *dcb) +{ + mtk_dpmaif_sw_start_rx(dcb); + mtk_dpmaif_enable_irq(dcb); + if (!mtk_hw_mmio_check(DCB_TO_MDEV(dcb))) { + if (mtk_dpmaif_drv_start_queue(dcb->drv_info, DPMAIF_RX) < 0) { + dev_err(DCB_TO_DEV(dcb), "Failed to start dpmaif hw rx\n"); + mtk_dpmaif_common_err_handle(dcb, true); + return; + } + + if (mtk_dpmaif_drv_start_queue(dcb->drv_info, DPMAIF_TX) < 0) { + dev_err(DCB_TO_DEV(dcb), "Failed to start dpmaif hw tx\n"); + mtk_dpmaif_common_err_handle(dcb, true); + return; + } + } +} + +static void mtk_dpmaif_trans_disable(struct mtk_dpmaif_ctlb *dcb) +{ + bool io_err = false; + + /* Wait tx doorbell and tx done work complete */ + mtk_dpmaif_sw_wait_tx_stop(dcb); + + /* Stop dpmaif hw tx and rx. */ + if (!mtk_hw_mmio_check(DCB_TO_MDEV(dcb))) { + if (mtk_dpmaif_drv_stop_queue(dcb->drv_info, DPMAIF_TX) < 0) { + io_err = true; + dev_err(DCB_TO_DEV(dcb), "Failed to stop dpmaif hw tx\n"); + } + + if (mtk_dpmaif_drv_stop_queue(dcb->drv_info, DPMAIF_RX) < 0) { + io_err = true; + dev_err(DCB_TO_DEV(dcb), "Failed to stop dpmaif hw rx\n"); + } + + if (io_err) + mtk_dpmaif_common_err_handle(dcb, true); + } + + /* Disable all dpmaif L1 interrupt. */ + mtk_dpmaif_disable_irq(dcb); + + /* Stop and wait rx handle done */ + mtk_dpmaif_sw_stop_rx(dcb); + + /* Wait bat reload work done */ + flush_workqueue(dcb->bat_info.reload_wq); +} + +static void mtk_dpmaif_trans_ctl(struct mtk_dpmaif_ctlb *dcb, bool enable) +{ + mutex_lock(&dcb->trans_ctl_lock); + if (enable) { + if (!dcb->trans_enabled) { + if (dcb->dpmaif_state == DPMAIF_STATE_PWRON && + dcb->dpmaif_user_ready) { + mtk_dpmaif_trans_enable(dcb); + dcb->trans_enabled = true; + } + } + } else { + if (dcb->trans_enabled) { + if (!(dcb->dpmaif_state == DPMAIF_STATE_PWRON) || + !dcb->dpmaif_user_ready) { + mtk_dpmaif_trans_disable(dcb); + dcb->trans_enabled = false; + } + } + } + mutex_unlock(&dcb->trans_ctl_lock); +} + +static void mtk_dpmaif_cmd_trans_ctl(struct mtk_dpmaif_ctlb *dcb, void *data) +{ + struct mtk_data_trans_ctl *trans_ctl = data; + + dcb->dpmaif_user_ready = trans_ctl->enable; + + /* Try best to drop all tx vq packets when disable trans */ + if (!trans_ctl->enable) + mtk_dpmaif_tx_vqs_reset(dcb); + + mtk_dpmaif_trans_ctl(dcb, trans_ctl->enable); +} + +static void mtk_dpmaif_cmd_intr_coalesce_write(struct mtk_dpmaif_ctlb *dcb, + unsigned int qid, enum dpmaif_drv_dir dir) +{ + struct dpmaif_drv_intr drv_intr; + + if (dir == DPMAIF_TX) { + drv_intr.pkt_threshold = dcb->txqs[qid].intr_coalesce_frame; + drv_intr.time_threshold = dcb->intr_coalesce.tx_coalesce_usecs; + } else { + drv_intr.pkt_threshold = dcb->rxqs[qid].intr_coalesce_frame; + drv_intr.time_threshold = dcb->intr_coalesce.rx_coalesce_usecs; + } + + drv_intr.dir = dir; + drv_intr.q_mask = BIT(qid); + + drv_intr.mode = 0; + if (drv_intr.pkt_threshold) + drv_intr.mode |= DPMAIF_INTR_EN_PKT; + if (drv_intr.time_threshold) + drv_intr.mode |= DPMAIF_INTR_EN_TIME; + dcb->drv_info->drv_ops->feature_cmd(dcb->drv_info, DATA_HW_INTR_COALESCE_SET, &drv_intr); +} + +static int mtk_dpmaif_cmd_intr_coalesce_set(struct mtk_dpmaif_ctlb *dcb, void *data) +{ + struct mtk_data_intr_coalesce *dpmaif_intr_cfg = &dcb->intr_coalesce; + struct mtk_data_intr_coalesce *user_intr_cfg = data; + int i; + + memcpy(dpmaif_intr_cfg, data, sizeof(*dpmaif_intr_cfg)); + + for (i = 0; i < dcb->res_cfg->rxq_cnt; i++) { + 
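/* Apply the new coalescing parameters to every RX queue here, then to every TX queue below. */ +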
dcb->rxqs[i].intr_coalesce_frame = user_intr_cfg->rx_coalesced_frames; + mtk_dpmaif_cmd_intr_coalesce_write(dcb, i, DPMAIF_RX); + } + + for (i = 0; i < dcb->res_cfg->txq_cnt; i++) { + dcb->txqs[i].intr_coalesce_frame = user_intr_cfg->tx_coalesced_frames; + mtk_dpmaif_cmd_intr_coalesce_write(dcb, i, DPMAIF_TX); + } + + return 0; +} + +static int mtk_dpmaif_cmd_intr_coalesce_get(struct mtk_dpmaif_ctlb *dcb, void *data) +{ + struct mtk_data_intr_coalesce *dpmaif_intr_cfg = &dcb->intr_coalesce; + + memcpy(data, dpmaif_intr_cfg, sizeof(*dpmaif_intr_cfg)); + + return 0; +} + +static int mtk_dpmaif_cmd_rxfh_set(struct mtk_dpmaif_ctlb *dcb, void *data) +{ + struct mtk_data_rxfh *indir_rxfh = data; + int ret; + + if (indir_rxfh->key) { + ret = mtk_dpmaif_drv_feature_cmd(dcb->drv_info, DATA_HW_HASH_SET, indir_rxfh->key); + if (ret < 0) { + dev_err(DCB_TO_DEV(dcb), "Failed to set hash key\n"); + return ret; + } + } + + if (indir_rxfh->indir) { + ret = mtk_dpmaif_drv_feature_cmd(dcb->drv_info, DATA_HW_INDIR_SET, + indir_rxfh->indir); + if (ret < 0) { + dev_err(DCB_TO_DEV(dcb), "Failed to set indirection table\n"); + return ret; + } + } + + return 0; +} + +static int mtk_dpmaif_cmd_rxfh_get(struct mtk_dpmaif_ctlb *dcb, void *data) +{ + struct mtk_data_rxfh *indir_rxfh = data; + int ret; + + if (indir_rxfh->key) { + ret = mtk_dpmaif_drv_feature_cmd(dcb->drv_info, DATA_HW_HASH_GET, indir_rxfh->key); + if (ret < 0) { + dev_err(DCB_TO_DEV(dcb), "Failed to get hash key\n"); + return ret; + } + } + + if (indir_rxfh->indir) { + ret = mtk_dpmaif_drv_feature_cmd(dcb->drv_info, DATA_HW_INDIR_GET, + indir_rxfh->indir); + if (ret < 0) { + dev_err(DCB_TO_DEV(dcb), "Failed to get indirection table\n"); + return ret; + } + } + + return 0; +} + +static void mtk_dpmaif_cmd_rxq_num_get(struct mtk_dpmaif_ctlb *dcb, void *data) +{ + *(unsigned int *)data = dcb->res_cfg->rxq_cnt; +} + +#define DATA_TRANS_STRING_LEN 32 + +static const char dpmaif_tx_stats[][DATA_TRANS_STRING_LEN] = { + "txq%u.tx_sw_packets", "txq%u.tx_hw_packets", "txq%u.tx_done_last_time", + "txq%u.tx_done_last_cnt", "txq%u.irq_evt.ul_done", "txq%u.irq_evt.ul_drb_empty", +}; + +static const char dpmaif_rx_stats[][DATA_TRANS_STRING_LEN] = { + "rxq%u.rx_packets", "rxq%u.rx_errors", "rxq%u.rx_dropped", + "rxq%u.rx_hw_ind_dropped", "rxq%u.rx_done_last_time", + "rxq%u.rx_done_last_cnt", "rxq%u.irq_evt.dl_done", + "rxq%u.irq_evt.pit_len_err", +}; + +static const char dpmaif_irq_stats[][DATA_TRANS_STRING_LEN] = { + "irq%u.irq_total_cnt", "irq%u.irq_last_time", +}; + +static const char dpmaif_misc_stats[][DATA_TRANS_STRING_LEN] = { + "ul_md_notready", "ul_md_pwr_notready", "ul_len_err", "dl_skb_len_err", + "dl_bat_cnt_len_err", "dl_pkt_empty", "dl_frag_empty", + "dl_mtu_err", "dl_frag_cnt_len_err", "hpc_ent_type_err", +}; + +#define DATA_TX_STATS_LEN ARRAY_SIZE(dpmaif_tx_stats) +#define DATA_RX_STATS_LEN ARRAY_SIZE(dpmaif_rx_stats) +#define DATA_IRQ_STATS_LEN ARRAY_SIZE(dpmaif_irq_stats) +#define DATA_MISC_STATS_LEN ARRAY_SIZE(dpmaif_misc_stats) + +static unsigned int mtk_dpmaif_describe_stats(struct mtk_dpmaif_ctlb *dcb, u8 *strings) +{ + unsigned int i, j, n_stats = 0; + + for (i = 0; i < dcb->res_cfg->txq_cnt; i++) { + n_stats += DATA_TX_STATS_LEN; + if (strings) { + for (j = 0; j < DATA_TX_STATS_LEN; j++) { + snprintf(strings, DATA_TRANS_STRING_LEN, dpmaif_tx_stats[j], i); + strings += DATA_TRANS_STRING_LEN; + } + } + } + + for (i = 0; i < dcb->res_cfg->rxq_cnt; i++) { + n_stats += DATA_RX_STATS_LEN; + if (strings) { + for (j = 0; j < DATA_RX_STATS_LEN; j++) { 
+ snprintf(strings, DATA_TRANS_STRING_LEN, dpmaif_rx_stats[j], i); + strings += DATA_TRANS_STRING_LEN; + } + } + } + + for (i = 0; i < dcb->res_cfg->irq_cnt; i++) { + n_stats += DATA_IRQ_STATS_LEN; + if (strings) { + for (j = 0; j < DATA_IRQ_STATS_LEN; j++) { + snprintf(strings, DATA_TRANS_STRING_LEN, dpmaif_irq_stats[j], i); + strings += DATA_TRANS_STRING_LEN; + } + } + } + + n_stats += DATA_MISC_STATS_LEN; + if (strings) + memcpy(strings, *dpmaif_misc_stats, sizeof(dpmaif_misc_stats)); + + return n_stats; +} + +static void mtk_dpmaif_read_stats(struct mtk_dpmaif_ctlb *dcb, u64 *data) +{ + unsigned int i, j = 0; + + for (i = 0; i < dcb->res_cfg->txq_cnt; i++) { + data[j++] = dcb->traffic_stats.tx_sw_packets[i]; + data[j++] = dcb->traffic_stats.tx_hw_packets[i]; + data[j++] = dcb->traffic_stats.tx_done_last_time[i]; + data[j++] = dcb->traffic_stats.tx_done_last_cnt[i]; + data[j++] = dcb->traffic_stats.irq_tx_evt[i].ul_done; + data[j++] = dcb->traffic_stats.irq_tx_evt[i].ul_drb_empty; + } + + for (i = 0; i < dcb->res_cfg->rxq_cnt; i++) { + data[j++] = dcb->traffic_stats.rx_packets[i]; + data[j++] = dcb->traffic_stats.rx_errors[i]; + data[j++] = dcb->traffic_stats.rx_dropped[i]; + data[j++] = dcb->traffic_stats.rx_hw_ind_dropped[i]; + data[j++] = dcb->traffic_stats.rx_done_last_time[i]; + data[j++] = dcb->traffic_stats.rx_done_last_cnt[i]; + data[j++] = dcb->traffic_stats.irq_rx_evt[i].dl_done; + data[j++] = dcb->traffic_stats.irq_rx_evt[i].pit_len_err; + } + + for (i = 0; i < dcb->res_cfg->irq_cnt; i++) { + data[j++] = dcb->traffic_stats.irq_total_cnt[i]; + data[j++] = dcb->traffic_stats.irq_last_time[i]; + } + + data[j++] = dcb->traffic_stats.irq_other_evt.ul_md_notready; + data[j++] = dcb->traffic_stats.irq_other_evt.ul_md_pwr_notready; + data[j++] = dcb->traffic_stats.irq_other_evt.ul_len_err; + data[j++] = dcb->traffic_stats.irq_other_evt.dl_skb_len_err; + data[j++] = dcb->traffic_stats.irq_other_evt.dl_bat_cnt_len_err; + data[j++] = dcb->traffic_stats.irq_other_evt.dl_pkt_empty; + data[j++] = dcb->traffic_stats.irq_other_evt.dl_frag_empty; + data[j++] = dcb->traffic_stats.irq_other_evt.dl_mtu_err; + data[j++] = dcb->traffic_stats.irq_other_evt.dl_frag_cnt_len_err; + data[j++] = dcb->traffic_stats.irq_other_evt.hpc_ent_type_err; +} + +static void mtk_dpmaif_cmd_string_cnt_get(struct mtk_dpmaif_ctlb *dcb, void *data) +{ + *(unsigned int *)data = mtk_dpmaif_describe_stats(dcb, NULL); +} + +static void mtk_dpmaif_cmd_handle(struct dpmaif_cmd_srv *srv) +{ + struct mtk_dpmaif_ctlb *dcb = srv->dcb; + struct dpmaif_vq *cmd_vq = srv->vq; + struct mtk_data_cmd *cmd_info; + struct sk_buff *skb; + int ret; + + while ((skb = skb_dequeue(&cmd_vq->list))) { + ret = 0; + cmd_info = SKB_TO_CMD(skb); + if (dcb->dpmaif_state == DPMAIF_STATE_PWRON) { + switch (cmd_info->cmd) { + case DATA_CMD_TRANS_CTL: + mtk_dpmaif_cmd_trans_ctl(dcb, CMD_TO_DATA(cmd_info)); + break; + case DATA_CMD_INTR_COALESCE_GET: + ret = mtk_dpmaif_cmd_intr_coalesce_get(dcb, + CMD_TO_DATA(cmd_info)); + break; + case DATA_CMD_INTR_COALESCE_SET: + ret = mtk_dpmaif_cmd_intr_coalesce_set(dcb, + CMD_TO_DATA(cmd_info)); + break; + case DATA_CMD_RXFH_GET: + ret = mtk_dpmaif_cmd_rxfh_get(dcb, + CMD_TO_DATA(cmd_info)); + break; + case DATA_CMD_RXFH_SET: + ret = mtk_dpmaif_cmd_rxfh_set(dcb, + CMD_TO_DATA(cmd_info)); + break; + case DATA_CMD_INDIR_SIZE_GET: + ret = mtk_dpmaif_drv_feature_cmd(dcb->drv_info, + DATA_HW_INDIR_SIZE_GET, + CMD_TO_DATA(cmd_info)); + break; + case DATA_CMD_HKEY_SIZE_GET: + ret = 
mtk_dpmaif_drv_feature_cmd(dcb->drv_info, + DATA_HW_HASH_KEY_SIZE_GET, + CMD_TO_DATA(cmd_info)); + break; + case DATA_CMD_RXQ_NUM_GET: + mtk_dpmaif_cmd_rxq_num_get(dcb, CMD_TO_DATA(cmd_info)); + break; + case DATA_CMD_STRING_CNT_GET: + mtk_dpmaif_cmd_string_cnt_get(dcb, CMD_TO_DATA(cmd_info)); + break; + case DATA_CMD_STRING_GET: + mtk_dpmaif_describe_stats(dcb, (u8 *)CMD_TO_DATA(cmd_info)); + break; + case DATA_CMD_TRANS_DUMP: + mtk_dpmaif_read_stats(dcb, (u64 *)CMD_TO_DATA(cmd_info)); + break; + default: + ret = -EOPNOTSUPP; + break; + } + } + cmd_info->ret = ret; + if (cmd_info->data_complete) + cmd_info->data_complete(skb); + } +} + +static void mtk_dpmaif_cmd_srv(struct work_struct *work) +{ + struct dpmaif_cmd_srv *srv = container_of(work, struct dpmaif_cmd_srv, work); + + mtk_dpmaif_cmd_handle(srv); +} + +static int mtk_dpmaif_cmd_srvs_init(struct mtk_dpmaif_ctlb *dcb) +{ + struct dpmaif_cmd_srv *cmd_srv = &dcb->cmd_srv; + struct dpmaif_vq *cmd_vq = &dcb->cmd_vq; + + cmd_vq->max_len = DEFAULT_TX_QUEUE_LEN; + skb_queue_head_init(&cmd_vq->list); + + cmd_srv->dcb = dcb; + cmd_srv->vq = cmd_vq; + + /* The cmd handle work will be scheduled by schedule_work(), use system workqueue. */ + INIT_WORK(&cmd_srv->work, mtk_dpmaif_cmd_srv); + + return 0; +} + +static void mtk_dpmaif_cmd_srvs_exit(struct mtk_dpmaif_ctlb *dcb) +{ + flush_work(&dcb->cmd_srv.work); + skb_queue_purge(&dcb->cmd_vq.list); +} + +static int mtk_dpmaif_drv_res_init(struct mtk_dpmaif_ctlb *dcb) +{ + int ret = 0; + + dcb->drv_info = devm_kzalloc(DCB_TO_DEV(dcb), sizeof(*dcb->drv_info), GFP_KERNEL); + if (!dcb->drv_info) + return -ENOMEM; + + dcb->drv_info->mdev = DCB_TO_MDEV(dcb); + + if (DPMAIF_GET_HW_VER(dcb) == 0x0800) { + dcb->drv_info->drv_ops = &dpmaif_drv_ops_t800; + } else { + devm_kfree(DCB_TO_DEV(dcb), dcb->drv_info); + dev_err(DCB_TO_DEV(dcb), "Unsupported mdev, hw_ver=0x%x\n", DPMAIF_GET_HW_VER(dcb)); + ret = -EFAULT; + } + + return ret; +} + +static void mtk_dpmaif_drv_res_exit(struct mtk_dpmaif_ctlb *dcb) +{ + devm_kfree(DCB_TO_DEV(dcb), dcb->drv_info); + dcb->drv_info = NULL; +} + +static void mtk_dpmaif_irq_tx_done(struct mtk_dpmaif_ctlb *dcb, unsigned int q_mask) +{ + unsigned int ulq_done; + int i; + + /* All txq done share one interrupt, and then, + * one interrupt will check all ulq done status and schedule corresponding bottom half. + */ + for (i = 0; i < dcb->res_cfg->txq_cnt; i++) { + ulq_done = q_mask & (1 << i); + if (ulq_done) { + queue_delayed_work(dcb->tx_done_wq, + &dcb->txqs[i].tx_done_work, + msecs_to_jiffies(0)); + + dcb->traffic_stats.irq_tx_evt[i].ul_done++; + } + } +} + +static void mtk_dpmaif_irq_rx_done(struct mtk_dpmaif_ctlb *dcb, unsigned int q_mask) +{ + struct dpmaif_rxq *rxq; + unsigned int dlq_done; + int i; + + /* RSS: one dlq done belongs to one interrupt, and then, + * one interrupt will only check one dlq done status and schedule bottom half. 
+ */ + for (i = 0; i < dcb->res_cfg->rxq_cnt; i++) { + dlq_done = q_mask & (1 << i); + if (dlq_done) { + dcb->traffic_stats.rx_done_last_time[i] = local_clock(); + rxq = &dcb->rxqs[i]; + dcb->traffic_stats.rx_done_last_cnt[i] = 0; + napi_schedule(&rxq->napi); + dcb->traffic_stats.irq_rx_evt[i].dl_done++; + break; + } + } +} + +static void mtk_dpmaif_irq_pit_len_err(struct mtk_dpmaif_ctlb *dcb, unsigned int q_mask) +{ + unsigned int pit_len_err; + int i; + + for (i = 0; i < dcb->res_cfg->rxq_cnt; i++) { + pit_len_err = q_mask & (1 << i); + if (pit_len_err) + break; + } + + dcb->traffic_stats.irq_rx_evt[i].pit_len_err++; + dcb->rxqs[i].pit_cnt_err_intr_set = true; +} + +static int mtk_dpmaif_irq_handle(int irq_id, void *data) +{ + struct dpmaif_drv_intr_info intr_info; + struct dpmaif_irq_param *irq_param; + struct dpmaif_bat_ring *bat_ring; + struct mtk_dpmaif_ctlb *dcb; + int ret; + int i; + + irq_param = data; + dcb = irq_param->dcb; + dcb->traffic_stats.irq_last_time[irq_param->idx] = local_clock(); + dcb->traffic_stats.irq_total_cnt[irq_param->idx]++; + + if (unlikely(dcb->dpmaif_state != DPMAIF_STATE_PWRON)) + goto out; + + memset(&intr_info, 0x00, sizeof(struct dpmaif_drv_intr_info)); + ret = mtk_dpmaif_drv_intr_handle(dcb->drv_info, &intr_info, irq_param->dpmaif_irq_src); + if (ret < 0) { + dev_err(DCB_TO_DEV(dcb), "Failed to get dpmaif drv irq info\n"); + goto err_get_drv_irq_info; + } + + for (i = 0; i < intr_info.intr_cnt; i++) { + switch (intr_info.intr_types[i]) { + case DPMAIF_INTR_UL_DONE: + mtk_dpmaif_irq_tx_done(dcb, intr_info.intr_queues[i]); + break; + case DPMAIF_INTR_DL_BATCNT_LEN_ERR: + dcb->traffic_stats.irq_other_evt.dl_bat_cnt_len_err++; + bat_ring = &dcb->bat_info.normal_bat_ring; + bat_ring->bat_cnt_err_intr_set = true; + queue_work(dcb->bat_info.reload_wq, &bat_ring->reload_work); + break; + case DPMAIF_INTR_DL_FRGCNT_LEN_ERR: + dcb->traffic_stats.irq_other_evt.dl_frag_cnt_len_err++; + bat_ring = &dcb->bat_info.frag_bat_ring; + bat_ring->bat_cnt_err_intr_set = true; + if (dcb->bat_info.frag_bat_enabled) + queue_work(dcb->bat_info.reload_wq, &bat_ring->reload_work); + break; + case DPMAIF_INTR_DL_PITCNT_LEN_ERR: + mtk_dpmaif_irq_pit_len_err(dcb, intr_info.intr_queues[i]); + break; + case DPMAIF_INTR_DL_DONE: + mtk_dpmaif_irq_rx_done(dcb, intr_info.intr_queues[i]); + break; + default: + break; + } + } + +err_get_drv_irq_info: + mtk_hw_clear_irq(DCB_TO_MDEV(dcb), irq_param->dev_irq_id); + mtk_hw_unmask_irq(DCB_TO_MDEV(dcb), irq_param->dev_irq_id); +out: + return IRQ_HANDLED; +} + +static int mtk_dpmaif_irq_init(struct mtk_dpmaif_ctlb *dcb) +{ + unsigned char irq_cnt = dcb->res_cfg->irq_cnt; + struct dpmaif_irq_param *irq_param; + enum mtk_irq_src irq_src; + int ret = 0; + int i, j; + + dcb->irq_params = devm_kcalloc(DCB_TO_DEV(dcb), irq_cnt, sizeof(*irq_param), GFP_KERNEL); + if (!dcb->irq_params) + return -ENOMEM; + + for (i = 0; i < irq_cnt; i++) { + irq_param = &dcb->irq_params[i]; + irq_param->idx = i; + irq_param->dcb = dcb; + irq_src = dcb->res_cfg->irq_src[i]; + irq_param->dpmaif_irq_src = irq_src; + irq_param->dev_irq_id = mtk_hw_get_irq_id(DCB_TO_MDEV(dcb), irq_src); + if (irq_param->dev_irq_id < 0) { + dev_err(DCB_TO_DEV(dcb), "Failed to allocate irq id, irq_src=%d\n", + irq_src); + ret = -EINVAL; + goto err_reg_irq; + } + + ret = mtk_hw_register_irq(DCB_TO_MDEV(dcb), irq_param->dev_irq_id, + mtk_dpmaif_irq_handle, irq_param); + if (ret < 0) { + dev_err(DCB_TO_DEV(dcb), "Failed to register irq, irq_src=%d\n", irq_src); + goto err_reg_irq; + } + } + + /* HW 
layer default mask dpmaif interrupt. */ + dcb->irq_enabled = false; + + return 0; +err_reg_irq: + for (j = i - 1; j >= 0; j--) { + irq_param = &dcb->irq_params[j]; + mtk_hw_unregister_irq(DCB_TO_MDEV(dcb), irq_param->dev_irq_id); + } + + devm_kfree(DCB_TO_DEV(dcb), dcb->irq_params); + dcb->irq_params = NULL; + + return ret; +} + +static int mtk_dpmaif_irq_exit(struct mtk_dpmaif_ctlb *dcb) +{ + unsigned char irq_cnt = dcb->res_cfg->irq_cnt; + struct dpmaif_irq_param *irq_param; + int i; + + for (i = 0; i < irq_cnt; i++) { + irq_param = &dcb->irq_params[i]; + mtk_hw_unregister_irq(DCB_TO_MDEV(dcb), irq_param->dev_irq_id); + } + + devm_kfree(DCB_TO_DEV(dcb), dcb->irq_params); + dcb->irq_params = NULL; + + return 0; +} + +static int mtk_dpmaif_hw_init(struct mtk_dpmaif_ctlb *dcb) +{ + struct dpmaif_bat_ring *bat_ring; + struct dpmaif_drv_cfg drv_cfg; + struct dpmaif_rxq *rxq; + struct dpmaif_txq *txq; + int ret; + int i; + + memset(&drv_cfg, 0x00, sizeof(struct dpmaif_drv_cfg)); + + bat_ring = &dcb->bat_info.normal_bat_ring; + drv_cfg.normal_bat_base = bat_ring->bat_dma_addr; + drv_cfg.normal_bat_cnt = bat_ring->bat_cnt; + drv_cfg.normal_bat_buf_size = bat_ring->buf_size; + + if (dcb->bat_info.frag_bat_enabled) { + drv_cfg.features |= DATA_HW_F_FRAG; + bat_ring = &dcb->bat_info.frag_bat_ring; + drv_cfg.frag_bat_base = bat_ring->bat_dma_addr; + drv_cfg.frag_bat_cnt = bat_ring->bat_cnt; + drv_cfg.frag_bat_buf_size = bat_ring->buf_size; + } + + if (dcb->res_cfg->cap & DATA_HW_F_LRO && dpmaif_lro_enable) + drv_cfg.features |= DATA_HW_F_LRO; + + drv_cfg.max_mtu = dcb->bat_info.max_mtu; + + for (i = 0; i < dcb->res_cfg->rxq_cnt; i++) { + rxq = &dcb->rxqs[i]; + drv_cfg.pit_base[i] = rxq->pit_dma_addr; + drv_cfg.pit_cnt[i] = rxq->pit_cnt; + } + + for (i = 0; i < dcb->res_cfg->txq_cnt; i++) { + txq = &dcb->txqs[i]; + drv_cfg.drb_base[i] = txq->drb_dma_addr; + drv_cfg.drb_cnt[i] = txq->drb_cnt; + } + + ret = mtk_dpmaif_drv_init(dcb->drv_info, &drv_cfg); + if (ret < 0) + dev_err(DCB_TO_DEV(dcb), "Failed to initialize dpmaif hw\n"); + + return ret; +} + +static int mtk_dpmaif_start(struct mtk_dpmaif_ctlb *dcb) +{ + struct dpmaif_bat_ring *bat_ring; + unsigned int normal_buf_cnt; + unsigned int frag_buf_cnt; + int ret; + + if (dcb->dpmaif_state == DPMAIF_STATE_PWRON) { + dev_err(DCB_TO_DEV(dcb), "Invalid parameters, dpmaif_state in PWRON\n"); + ret = -EINVAL; + goto out; + } + + /* Reload all buffer around normal bat. */ + bat_ring = &dcb->bat_info.normal_bat_ring; + normal_buf_cnt = bat_ring->bat_cnt - 1; + ret = mtk_dpmaif_reload_rx_buf(dcb, bat_ring, normal_buf_cnt, false); + if (ret < 0) { + dev_err(DCB_TO_DEV(dcb), "Failed to reload normal bat buffer\n"); + goto out; + } + + /* Reload all buffer around normal bat. */ + if (dcb->bat_info.frag_bat_enabled) { + bat_ring = &dcb->bat_info.frag_bat_ring; + frag_buf_cnt = bat_ring->bat_cnt - 1; + ret = mtk_dpmaif_reload_rx_buf(dcb, bat_ring, frag_buf_cnt, false); + if (ret < 0) { + dev_err(DCB_TO_DEV(dcb), "Failed to reload frag bat buffer\n"); + goto out; + } + } + + /* Initialize dpmaif hw. */ + ret = mtk_dpmaif_hw_init(dcb); + if (ret < 0) { + dev_err(DCB_TO_DEV(dcb), "Failed to initialize dpmaif hw\n"); + goto out; + } + + /* Send doorbell to dpmaif HW about normal bat buffer count. 
*/ + ret = mtk_dpmaif_drv_send_doorbell(dcb->drv_info, DPMAIF_BAT, 0, normal_buf_cnt); + if (unlikely(ret < 0)) { + dev_err(DCB_TO_DEV(dcb), + "Failed to send normal bat buffer count doorbell\n"); + mtk_dpmaif_common_err_handle(dcb, true); + goto out; + } + + /* Send doorbell to dpmaif HW about frag bat buffer count. */ + if (dcb->bat_info.frag_bat_enabled) { + ret = mtk_dpmaif_drv_send_doorbell(dcb->drv_info, DPMAIF_FRAG, 0, frag_buf_cnt); + if (unlikely(ret < 0)) { + dev_err(DCB_TO_DEV(dcb), + "Failed to send frag bat buffer count doorbell\n"); + mtk_dpmaif_common_err_handle(dcb, true); + goto out; + } + } + + /* Initialize and run all tx services. */ + ret = mtk_dpmaif_tx_srvs_start(dcb); + if (ret < 0) { + dev_err(DCB_TO_DEV(dcb), "Failed to start all tx srvs\n"); + goto out; + } + + dcb->dpmaif_state = DPMAIF_STATE_PWRON; + mtk_dpmaif_disable_irq(dcb); + + return 0; +out: + return ret; +} + +static void mtk_dpmaif_sw_reset(struct mtk_dpmaif_ctlb *dcb) +{ + mtk_dpmaif_tx_res_reset(dcb); + mtk_dpmaif_rx_res_reset(dcb); + mtk_dpmaif_bat_res_reset(&dcb->bat_info); + mtk_dpmaif_tx_vqs_reset(dcb); + skb_queue_purge(&dcb->cmd_vq.list); + memset(&dcb->traffic_stats, 0x00, sizeof(struct dpmaif_traffic_stats)); + dcb->dpmaif_user_ready = false; + dcb->trans_enabled = false; +} + +static int mtk_dpmaif_stop(struct mtk_dpmaif_ctlb *dcb) +{ + if (dcb->dpmaif_state == DPMAIF_STATE_PWROFF) + goto out; + + /* The flow of trans control as follow depends on dpmaif state, + * so change state firstly. + */ + dcb->dpmaif_state = DPMAIF_STATE_PWROFF; + + /* Stop all tx service. */ + mtk_dpmaif_tx_srvs_stop(dcb); + + /* Stop dpmaif tx/rx handle. */ + mtk_dpmaif_trans_ctl(dcb, false); +out: + return 0; +} + +static void mtk_dpmaif_fsm_callback(struct mtk_fsm_param *fsm_param, void *data) +{ + struct mtk_dpmaif_ctlb *dcb = data; + + if (!dcb || !fsm_param) { + pr_warn("Invalid fsm parameter\n"); + return; + } + + switch (fsm_param->to) { + case FSM_STATE_OFF: + mtk_dpmaif_stop(dcb); + + /* Flush all cmd process. 
*/ + flush_work(&dcb->cmd_srv.work); + + /* clear data structure */ + mtk_dpmaif_sw_reset(dcb); + break; + case FSM_STATE_BOOTUP: + if (fsm_param->fsm_flag == FSM_F_MD_HS_START) + mtk_dpmaif_start(dcb); + break; + case FSM_STATE_READY: + break; + case FSM_STATE_MDEE: + if (fsm_param->fsm_flag == FSM_F_MDEE_INIT) + mtk_dpmaif_stop(dcb); + break; + default: + break; + } +} + +static int mtk_dpmaif_fsm_init(struct mtk_dpmaif_ctlb *dcb) +{ + int ret; + + ret = mtk_fsm_notifier_register(DCB_TO_MDEV(dcb), MTK_USER_DPMAIF, + mtk_dpmaif_fsm_callback, dcb, FSM_PRIO_1, false); + if (ret < 0) + dev_err(DCB_TO_DEV(dcb), "Failed to register dpmaif fsm notifier\n"); + + return ret; +} + +static int mtk_dpmaif_fsm_exit(struct mtk_dpmaif_ctlb *dcb) +{ + int ret; + + ret = mtk_fsm_notifier_unregister(DCB_TO_MDEV(dcb), MTK_USER_DPMAIF); + if (ret < 0) + dev_err(DCB_TO_DEV(dcb), "Failed to unregister dpmaif fsm notifier\n"); + + return ret; +} + +static int mtk_dpmaif_sw_init(struct mtk_data_blk *data_blk, const struct dpmaif_res_cfg *res_cfg) +{ + struct mtk_dpmaif_ctlb *dcb; + int ret; + + dcb = devm_kzalloc(data_blk->mdev->dev, sizeof(*dcb), GFP_KERNEL); + if (!dcb) + return -ENOMEM; + + data_blk->dcb = dcb; + dcb->data_blk = data_blk; + dcb->dpmaif_state = DPMAIF_STATE_PWROFF; + dcb->dpmaif_user_ready = false; + dcb->trans_enabled = false; + mutex_init(&dcb->trans_ctl_lock); + dcb->res_cfg = res_cfg; + + /* interrupt coalesce init */ + dcb->intr_coalesce.rx_coalesced_frames = DPMAIF_DFLT_INTR_RX_COA_FRAMES; + dcb->intr_coalesce.tx_coalesced_frames = DPMAIF_DFLT_INTR_TX_COA_FRAMES; + dcb->intr_coalesce.rx_coalesce_usecs = DPMAIF_DFLT_INTR_RX_COA_USECS; + dcb->intr_coalesce.tx_coalesce_usecs = DPMAIF_DFLT_INTR_TX_COA_USECS; + + /* Check and set normal and frag bat buffer size. 
*/ + mtk_dpmaif_set_bat_buf_size(dcb, DPMAIF_DFLT_MTU); + + ret = mtk_dpmaif_bm_pool_init(dcb); + if (ret < 0) { + dev_err(DCB_TO_DEV(dcb), "Failed to initialize bm pool, ret=%d\n", ret); + goto err_init_bm_pool; + } + + ret = mtk_dpmaif_sw_res_init(dcb); + if (ret < 0) { + dev_err(DCB_TO_DEV(dcb), "Failed to initialize dpmaif sw res, ret=%d\n", ret); + goto err_init_sw_res; + } + + ret = mtk_dpmaif_tx_srvs_init(dcb); + if (ret < 0) { + dev_err(DCB_TO_DEV(dcb), "Failed to initialize dpmaif tx res, ret=%d\n", ret); + goto err_init_tx_res; + } + + ret = mtk_dpmaif_cmd_srvs_init(dcb); + if (ret < 0) { + dev_err(DCB_TO_DEV(dcb), "Failed to initialize dpmaif tx cmd res, ret=%d\n", ret); + goto err_init_ctl_res; + } + + ret = mtk_dpmaif_drv_res_init(dcb); + if (ret < 0) { + dev_err(DCB_TO_DEV(dcb), "Failed to initialize dpmaif drv res, ret=%d\n", ret); + goto err_init_drv_res; + } + + ret = mtk_dpmaif_fsm_init(dcb); + if (ret < 0) { + dev_err(DCB_TO_DEV(dcb), "Failed to initialize dpmaif fsm, ret=%d\n", ret); + goto err_init_fsm; + } + + ret = mtk_dpmaif_irq_init(dcb); + if (ret < 0) { + dev_err(DCB_TO_DEV(dcb), "Failed to initialize dpmaif int, ret=%d\n", ret); + goto err_init_irq; + } + + return 0; + +err_init_irq: + mtk_dpmaif_fsm_exit(dcb); +err_init_fsm: + mtk_dpmaif_drv_res_exit(dcb); +err_init_drv_res: + mtk_dpmaif_cmd_srvs_exit(dcb); +err_init_ctl_res: + mtk_dpmaif_tx_srvs_exit(dcb); +err_init_tx_res: + mtk_dpmaif_sw_res_exit(dcb); +err_init_sw_res: + mtk_dpmaif_bm_pool_exit(dcb); +err_init_bm_pool: + devm_kfree(DCB_TO_DEV(dcb), dcb); + data_blk->dcb = NULL; + + return ret; +} + +static int mtk_dpmaif_sw_exit(struct mtk_data_blk *data_blk) +{ + struct mtk_dpmaif_ctlb *dcb = data_blk->dcb; + int ret = 0; + + if (!data_blk->dcb) { + pr_err("Invalid parameter\n"); + return -EINVAL; + } + + mtk_dpmaif_irq_exit(dcb); + mtk_dpmaif_fsm_exit(dcb); + mtk_dpmaif_drv_res_exit(dcb); + mtk_dpmaif_cmd_srvs_exit(dcb); + mtk_dpmaif_tx_srvs_exit(dcb); + mtk_dpmaif_sw_res_exit(dcb); + mtk_dpmaif_bm_pool_exit(dcb); + + devm_kfree(DCB_TO_DEV(dcb), dcb); + + return ret; +} + +static int mtk_dpmaif_poll_rx_pit(struct dpmaif_rxq *rxq) +{ + struct mtk_dpmaif_ctlb *dcb = rxq->dcb; + unsigned int sw_rd_idx, hw_wr_idx; + unsigned int pit_cnt; + int ret; + + sw_rd_idx = rxq->pit_rd_idx; + ret = mtk_dpmaif_drv_get_ring_idx(dcb->drv_info, DPMAIF_PIT_WIDX, rxq->id); + if (unlikely(ret < 0)) { + dev_err(DCB_TO_DEV(dcb), + "Failed to read rxq%u hw pit_wr_idx, ret=%d\n", rxq->id, ret); + mtk_dpmaif_common_err_handle(dcb, true); + goto out; + } + + hw_wr_idx = ret; + pit_cnt = mtk_dpmaif_ring_buf_readable(rxq->pit_cnt, sw_rd_idx, hw_wr_idx); + rxq->pit_wr_idx = hw_wr_idx; + + return pit_cnt; + +out: + return ret; +} + +#define DPMAIF_POLL_STEP 20 +#define DPMAIF_POLL_PIT_CNT_MAX 100 +#define DPMAIF_PIT_SEQ_CHECK_FAIL_CNT 2500 + +static int mtk_dpmaif_check_pit_seq(struct dpmaif_rxq *rxq, struct dpmaif_pd_pit *pit) +{ + unsigned int expect_pit_seq, cur_pit_seq; + unsigned int count = 0; + int ret; + + expect_pit_seq = rxq->pit_seq_expect; + /* The longest check time is 2ms, step is 20us */ + do { + cur_pit_seq = FIELD_GET(PIT_PD_SEQ, le32_to_cpu(pit->pd_footer)); + if (cur_pit_seq > DPMAIF_PIT_SEQ_MAX) { + dev_err(DCB_TO_DEV(rxq->dcb), + "Invalid rxq%u pit sequence number, cur_seq(%u) > max_seq(%u)\n", + rxq->id, cur_pit_seq, DPMAIF_PIT_SEQ_MAX); + break; + } + + if (cur_pit_seq == expect_pit_seq) { + rxq->pit_seq_expect++; + if (rxq->pit_seq_expect >= DPMAIF_PIT_SEQ_MAX) + rxq->pit_seq_expect = 0; + + 
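/* The expected sequence number was found, so clear the accumulated failure count. */ +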
rxq->pit_seq_fail_cnt = 0; + ret = 0; + + goto out; + } else { + count++; + } + + udelay(DPMAIF_POLL_STEP); + } while (count <= DPMAIF_POLL_PIT_CNT_MAX); + + /* If pit sequence doesn't pass in 5 seconds. */ + ret = -DATA_PIT_SEQ_CHK_FAIL; + rxq->pit_seq_fail_cnt++; + if (rxq->pit_seq_fail_cnt >= DPMAIF_PIT_SEQ_CHECK_FAIL_CNT) { + mtk_dpmaif_common_err_handle(rxq->dcb, true); + rxq->pit_seq_fail_cnt = 0; + } + +out: + return ret; +} + +static void mtk_dpmaif_rx_msg_pit(struct dpmaif_rxq *rxq, struct dpmaif_msg_pit *msg_pit, + struct dpmaif_rx_record *rx_record) +{ + rx_record->cur_ch_id = FIELD_GET(PIT_MSG_CHNL_ID, le32_to_cpu(msg_pit->dword1)); + rx_record->checksum = FIELD_GET(PIT_MSG_CHECKSUM, le32_to_cpu(msg_pit->dword1)); + rx_record->pit_dp = FIELD_GET(PIT_MSG_DP, le32_to_cpu(msg_pit->dword1)); + rx_record->hash = FIELD_GET(PIT_MSG_HASH, le32_to_cpu(msg_pit->dword3)); +} + +static int mtk_dpmaif_pit_bid_check(struct dpmaif_rxq *rxq, unsigned int cur_bid) +{ + union dpmaif_bat_record *cur_bat_record; + struct mtk_dpmaif_ctlb *dcb = rxq->dcb; + struct dpmaif_bat_ring *bat_ring; + int ret = 0; + + bat_ring = &rxq->dcb->bat_info.normal_bat_ring; + cur_bat_record = bat_ring->sw_record_base + cur_bid; + + if (unlikely(!cur_bat_record->normal.skb || cur_bid >= bat_ring->bat_cnt)) { + dev_err(DCB_TO_DEV(dcb), + "Invalid parameter rxq%u bat%d, bid=%u, bat_cnt=%u\n", + rxq->id, bat_ring->type, cur_bid, bat_ring->bat_cnt); + ret = -DATA_FLOW_CHK_ERR; + } + + return ret; +} + +static int mtk_dpmaif_rx_set_data_to_skb(struct dpmaif_rxq *rxq, struct dpmaif_pd_pit *pit_info, + struct dpmaif_rx_record *rx_record) +{ + struct dpmaif_bat_ring *bat_ring = &rxq->dcb->bat_info.normal_bat_ring; + unsigned long long data_dma_addr, data_dma_base_addr; + struct mtk_dpmaif_ctlb *dcb = rxq->dcb; + union dpmaif_bat_record *bat_record; + struct sk_buff *new_skb; + unsigned int *tmp_u32; + unsigned int data_len; + int data_offset; + + bat_record = bat_ring->sw_record_base + mtk_dpmaif_pit_bid(pit_info); + new_skb = bat_record->normal.skb; + data_dma_base_addr = (unsigned long long)bat_record->normal.data_dma_addr; + + dma_unmap_single(dcb->skb_pool->dev, bat_record->normal.data_dma_addr, + bat_record->normal.data_len, DMA_FROM_DEVICE); + + /* Calculate data address and data length. */ + data_dma_addr = le32_to_cpu(pit_info->addr_high); + data_dma_addr = (data_dma_addr << 32) + le32_to_cpu(pit_info->addr_low); + data_offset = (int)(data_dma_addr - data_dma_base_addr); + data_len = FIELD_GET(PIT_PD_DATA_LEN, le32_to_cpu(pit_info->pd_header)); + + /* Only the header_offset of the first packet of lro skb is zero, + * and other packet's header_offset is not zero. + * The data_len is the packet len that has subtracted the packet header length. + */ + if (FIELD_GET(PIT_PD_HD_OFFSET, le32_to_cpu(pit_info->pd_footer)) != 0) + data_len += (FIELD_GET(PIT_PD_HD_OFFSET, le32_to_cpu(pit_info->pd_footer)) * 4); + + /* Check and rebuild skb. 
*/ + new_skb->len = 0; + skb_reset_tail_pointer(new_skb); + skb_reserve(new_skb, data_offset); + if (unlikely((new_skb->tail + data_len) > new_skb->end)) { + dev_err(DCB_TO_DEV(dcb), + "pkt(%u/%u):len=%u, offset=0x%llx-0x%llx\n", + rxq->pit_rd_idx, mtk_dpmaif_pit_bid(pit_info), data_len, + data_dma_addr, data_dma_base_addr); + + if (rxq->pit_rd_idx > 2) { + tmp_u32 = (unsigned int *)(rxq->pit_base + rxq->pit_rd_idx - 2); + dev_err(DCB_TO_DEV(dcb), + "pit(%u): 0x%08x, 0x%08x, 0x%08x,0x%08x\n" + "0x%08x, 0x%08x, 0x%08x, 0x%08x, 0x%08x\n", + rxq->pit_rd_idx - 2, tmp_u32[0], tmp_u32[1], + tmp_u32[2], tmp_u32[3], tmp_u32[4], + tmp_u32[5], tmp_u32[6], + tmp_u32[7], tmp_u32[8]); + } + + return -DATA_FLOW_CHK_ERR; + } + + skb_put(new_skb, data_len); + + /* None first aggregated packet should reduce IP and Protocol header part. */ + if (FIELD_GET(PIT_PD_HD_OFFSET, le32_to_cpu(pit_info->pd_footer)) != 0) + skb_pull(new_skb, + FIELD_GET(PIT_PD_HD_OFFSET, le32_to_cpu(pit_info->pd_footer)) * 4); + + rx_record->cur_skb = new_skb; + bat_record->normal.skb = NULL; + + return 0; +} + +static int mtk_dpmaif_bat_ring_set_mask(struct mtk_dpmaif_ctlb *dcb, enum dpmaif_bat_type type, + unsigned int bat_idx) +{ + struct dpmaif_bat_ring *bat_ring; + int ret = 0; + + if (type == NORMAL_BAT) + bat_ring = &dcb->bat_info.normal_bat_ring; + else + bat_ring = &dcb->bat_info.frag_bat_ring; + + if (likely(bat_ring->mask_tbl[bat_idx] == 0)) { + bat_ring->mask_tbl[bat_idx] = 1; + } else { + dev_err(DCB_TO_DEV(dcb), "Invalid bat%u mask_table[%u] value\n", type, bat_idx); + ret = -DATA_FLOW_CHK_ERR; + } + + return ret; +} + +static void mtk_dpmaif_lro_add_skb(struct mtk_dpmaif_ctlb *dcb, struct dpmaif_rx_record *rx_record) +{ + struct sk_buff *parent = rx_record->lro_parent; + struct sk_buff *last = rx_record->lro_last_skb; + struct sk_buff *cur_skb = rx_record->cur_skb; + + if (!cur_skb) { + dev_err(DCB_TO_DEV(dcb), "Invalid cur_skb\n"); + return; + } + + if (parent) { + /* Update the len, data_len, truesize of the lro skb. */ + parent->len += cur_skb->len; + parent->data_len += cur_skb->len; + parent->truesize += cur_skb->truesize; + if (last) + last->next = cur_skb; + else + skb_shinfo(parent)->frag_list = cur_skb; + + last = cur_skb; + rx_record->lro_last_skb = last; + } else { + parent = cur_skb; + rx_record->lro_parent = parent; + } + + rx_record->lro_pkt_cnt++; +} + +static int mtk_dpmaif_get_rx_pkt(struct dpmaif_rxq *rxq, struct dpmaif_pd_pit *pit_info, + struct dpmaif_rx_record *rx_record) +{ + struct mtk_dpmaif_ctlb *dcb = rxq->dcb; + unsigned int cur_bid; + int ret; + + cur_bid = mtk_dpmaif_pit_bid(pit_info); + + /* Check the bid in pit information, don't exceed bat size. */ + ret = mtk_dpmaif_pit_bid_check(rxq, cur_bid); + if (unlikely(ret < 0)) { + dev_err(DCB_TO_DEV(dcb), "Failed to check rxq%u pit normal bid\n", rxq->id); + goto out; + } + + /* Receive data from bat and save to rx_record. */ + ret = mtk_dpmaif_rx_set_data_to_skb(rxq, pit_info, rx_record); + if (unlikely(ret < 0)) { + dev_err(DCB_TO_DEV(dcb), "Failed to set rxq%u data to skb\n", rxq->id); + goto out; + } + + /* Set bat mask that have been received. */ + ret = mtk_dpmaif_bat_ring_set_mask(dcb, NORMAL_BAT, cur_bid); + if (unlikely(ret < 0)) { + dev_err(DCB_TO_DEV(dcb), "Failed to rxq%u set bat mask\n", rxq->id); + goto out; + } + + /* Set current skb to lro skb. 
*/ + mtk_dpmaif_lro_add_skb(dcb, rx_record); + + return 0; + +out: + return ret; +} + +static int mtk_dpmaif_pit_bid_frag_check(struct dpmaif_rxq *rxq, unsigned int cur_bid) +{ + union dpmaif_bat_record *cur_bat_record; + struct mtk_dpmaif_ctlb *dcb = rxq->dcb; + struct dpmaif_bat_ring *bat_ring; + int ret = 0; + + bat_ring = &rxq->dcb->bat_info.frag_bat_ring; + cur_bat_record = bat_ring->sw_record_base + cur_bid; + + if (unlikely(!cur_bat_record->frag.page || cur_bid >= bat_ring->bat_cnt)) { + dev_err(DCB_TO_DEV(dcb), + "Invalid parameter rxq%u bat%d, bid=%u, bat_cnt=%u\n", + rxq->id, bat_ring->type, cur_bid, bat_ring->bat_cnt); + ret = -DATA_FLOW_CHK_ERR; + } + + return ret; +} + +static void mtk_dpmaif_lro_add_frag(struct dpmaif_rx_record *rx_record, unsigned int frags_len) +{ + struct sk_buff *frags_base_skb = rx_record->cur_skb; + struct sk_buff *parent = rx_record->lro_parent; + + /* The frags item do not belong to the lro parent skb, + * it belongs to the lro frags skb, so, must update the lro parent skb data. + */ + if (parent != frags_base_skb) { + /* Non-linear zone data length(frags[] and frag_list). */ + parent->data_len += frags_len; + /* Non-linear zone data length + linear zone data length. */ + parent->len += frags_len; + /* The all data length. */ + parent->truesize += frags_len; + } +} + +static int mtk_dpmaif_rx_set_frag_to_skb(struct dpmaif_rxq *rxq, struct dpmaif_pd_pit *pit_info, + struct dpmaif_rx_record *rx_record) +{ + struct dpmaif_bat_ring *bat_ring = &rxq->dcb->bat_info.frag_bat_ring; + unsigned long long data_dma_addr, data_dma_base_addr; + struct sk_buff *base_skb = rx_record->cur_skb; + struct mtk_dpmaif_ctlb *dcb = rxq->dcb; + union dpmaif_bat_record *bat_record; + struct page_mapped_t *cur_frag; + unsigned int page_offset; + unsigned int data_len; + struct page *page; + int data_offset; + + bat_record = bat_ring->sw_record_base + mtk_dpmaif_pit_bid(pit_info); + cur_frag = &bat_record->frag; + page = cur_frag->page; + page_offset = cur_frag->offset; + data_dma_base_addr = (unsigned long long)cur_frag->data_dma_addr; + + dma_unmap_page(DCB_TO_DEV(dcb), cur_frag->data_dma_addr, + cur_frag->data_len, DMA_FROM_DEVICE); + + /* Calculate data address and data length. */ + data_dma_addr = le32_to_cpu(pit_info->addr_high); + data_dma_addr = (data_dma_addr << 32) + le32_to_cpu(pit_info->addr_low); + data_offset = (int)(data_dma_addr - data_dma_base_addr); + data_len = FIELD_GET(PIT_PD_DATA_LEN, le32_to_cpu(pit_info->pd_header)); + + /* Add fragment data to cur_skb->frags[]. */ + skb_add_rx_frag(base_skb, skb_shinfo(base_skb)->nr_frags, page, + page_offset + data_offset, data_len, cur_frag->data_len); + + /* Record data length to lro parent. */ + mtk_dpmaif_lro_add_frag(rx_record, data_len); + + cur_frag->page = NULL; + + return 0; +} + +static int mtk_dpmaif_get_rx_frag(struct dpmaif_rxq *rxq, struct dpmaif_pd_pit *pit_info, + struct dpmaif_rx_record *rx_record) +{ + struct mtk_dpmaif_ctlb *dcb = rxq->dcb; + unsigned int cur_bid; + int ret; + + cur_bid = mtk_dpmaif_pit_bid(pit_info); + + /* Check the bid in pit information, don't exceed frag bat size. */ + ret = mtk_dpmaif_pit_bid_frag_check(rxq, cur_bid); + if (unlikely(ret < 0)) { + dev_err(DCB_TO_DEV(dcb), "Failed to check rxq%u pit frag bid\n", rxq->id); + goto out; + } + + /* Receive data from frag bat and save to currunt skb. 
*/ + ret = mtk_dpmaif_rx_set_frag_to_skb(rxq, pit_info, rx_record); + if (unlikely(ret < 0)) { + dev_err(DCB_TO_DEV(dcb), "Failed to set rxq%u frag to skb\n", rxq->id); + goto out; + } + + /* Set bat mask that have been received. */ + ret = mtk_dpmaif_bat_ring_set_mask(dcb, FRAG_BAT, cur_bid); + if (unlikely(ret < 0)) { + dev_err(DCB_TO_DEV(dcb), "Failed to rxq%u set frag bat mask\n", rxq->id); + goto out; + } + + return 0; +out: + return ret; +} + +static void mtk_dpmaif_set_rcsum(struct sk_buff *skb, unsigned int hw_checksum_state) +{ + if (hw_checksum_state == CS_RESULT_PASS) + skb->ip_summed = CHECKSUM_UNNECESSARY; + else + skb->ip_summed = CHECKSUM_NONE; +} + +static void mtk_dpmaif_set_rxhash(struct sk_buff *skb, u32 hw_hash) +{ + skb_set_hash(skb, hw_hash, PKT_HASH_TYPE_L4); +} + +static int mtk_dpmaif_rx_skb(struct dpmaif_rxq *rxq, struct dpmaif_rx_record *rx_record) +{ + struct sk_buff *new_skb = rx_record->lro_parent; + struct mtk_dpmaif_ctlb *dcb = rxq->dcb; + int ret = 0; + + if (unlikely(rx_record->pit_dp)) { + dcb->traffic_stats.rx_hw_ind_dropped[rxq->id]++; + dev_kfree_skb_any(new_skb); + goto out; + } + + /* Check HW rx checksum offload status. */ + mtk_dpmaif_set_rcsum(new_skb, rx_record->checksum); + + /* Set skb hash from HW. */ + mtk_dpmaif_set_rxhash(new_skb, rx_record->hash); + + skb_record_rx_queue(new_skb, rxq->id); + + dcb->traffic_stats.rx_packets[rxq->id]++; +out: + rx_record->lro_parent = NULL; + return ret; +} + +static int mtk_dpmaif_recycle_pit_internal(struct dpmaif_rxq *rxq, unsigned short pit_rel_cnt) +{ + unsigned short old_sw_rel_idx, new_sw_rel_idx, old_hw_wr_idx; + struct mtk_dpmaif_ctlb *dcb = rxq->dcb; + int ret = 0; + + old_sw_rel_idx = rxq->pit_rel_rd_idx; + new_sw_rel_idx = old_sw_rel_idx + pit_rel_cnt; + old_hw_wr_idx = rxq->pit_wr_idx; + + /* Queue is empty and no need to release. */ + if (old_hw_wr_idx == old_sw_rel_idx) + dev_err(DCB_TO_DEV(dcb), "old_hw_wr_idx == old_sw_rel_idx\n"); + + /* pit_rel_rd_idx should not exceed pit_wr_idx. */ + if (old_hw_wr_idx > old_sw_rel_idx) { + if (new_sw_rel_idx > old_hw_wr_idx) + dev_err(DCB_TO_DEV(dcb), "new_rel_idx > old_hw_wr_idx\n"); + } else if (old_hw_wr_idx < old_sw_rel_idx) { + if (new_sw_rel_idx >= rxq->pit_cnt) { + new_sw_rel_idx = new_sw_rel_idx - rxq->pit_cnt; + if (new_sw_rel_idx > old_hw_wr_idx) + dev_err(DCB_TO_DEV(dcb), "new_rel_idx > old_wr_idx\n"); + } + } + + /* Notify the available pit count to HW. 
*/ + ret = mtk_dpmaif_drv_send_doorbell(dcb->drv_info, DPMAIF_PIT, rxq->id, pit_rel_cnt); + if (unlikely(ret < 0)) { + dev_err(DCB_TO_DEV(dcb), + "Failed to send pit doorbell,pit-r/w/rel-%u,%u,%u, rel_pit_cnt=%u, ret=%d\n", + rxq->pit_rd_idx, rxq->pit_wr_idx, + rxq->pit_rel_rd_idx, pit_rel_cnt, ret); + mtk_dpmaif_common_err_handle(dcb, true); + } + + rxq->pit_rel_rd_idx = new_sw_rel_idx; + + return ret; +} + +static int mtk_dpmaif_recycle_rx_ring(struct dpmaif_rxq *rxq) +{ + int ret = 0; + + /* burst recycle check */ + if (rxq->pit_rel_cnt < rxq->pit_burst_rel_cnt) + return 0; + + if (unlikely(rxq->pit_rel_cnt > rxq->pit_cnt)) { + dev_err(DCB_TO_DEV(rxq->dcb), "Invalid rxq%u pit release count, %u>%u\n", + rxq->id, rxq->pit_rel_cnt, rxq->pit_cnt); + ret = -DATA_FLOW_CHK_ERR; + goto out; + } + + ret = mtk_dpmaif_recycle_pit_internal(rxq, rxq->pit_rel_cnt); + if (unlikely(ret < 0)) { + dev_err(DCB_TO_DEV(rxq->dcb), "Failed to rxq%u recycle pit, ret=%d\n", + rxq->id, ret); + } + + rxq->pit_rel_cnt = 0; + + mtk_dpmaif_queue_bat_reload_work(rxq->dcb); + + if (rxq->pit_cnt_err_intr_set) { + rxq->pit_cnt_err_intr_set = false; + mtk_dpmaif_drv_intr_complete(rxq->dcb->drv_info, + DPMAIF_INTR_DL_PITCNT_LEN_ERR, rxq->id, 0); + } + +out: + return ret; +} + +static int mtk_dpmaif_rx_data_collect_internal(struct dpmaif_rxq *rxq, int pit_cnt, int budget, + unsigned int *pkt_cnt) +{ + struct dpmaif_rx_record *rx_record = &rxq->rx_record; + struct mtk_dpmaif_ctlb *dcb = rxq->dcb; + struct dpmaif_pd_pit *pit_info; + unsigned int recv_pkt_cnt = 0; + unsigned int rx_cnt, cur_pit; + int ret; + + cur_pit = rxq->pit_rd_idx; + for (rx_cnt = 0; rx_cnt < pit_cnt; rx_cnt++) { + /* Check if reach rx packet budget. */ + if (!rx_record->msg_pit_recv) { + if (recv_pkt_cnt >= budget) + break; + } + + /* Pit sequence check. */ + pit_info = rxq->pit_base + cur_pit; + ret = mtk_dpmaif_check_pit_seq(rxq, pit_info); + if (unlikely(ret < 0)) + break; + + /* Parse message pit. */ + if (FIELD_GET(PIT_PD_PKT_TYPE, le32_to_cpu(pit_info->pd_header)) == MSG_PIT) { + if (unlikely(rx_record->msg_pit_recv)) { + if (rx_record->lro_parent) { + dcb->traffic_stats.rx_errors[rxq->id]++; + dcb->traffic_stats.rx_dropped[rxq->id]++; + dev_kfree_skb_any(rx_record->lro_parent); + } + + memset(&rxq->rx_record, 0x00, sizeof(rxq->rx_record)); + } + + rx_record->msg_pit_recv = true; + mtk_dpmaif_rx_msg_pit(rxq, (struct dpmaif_msg_pit *)pit_info, rx_record); + } else { + /* Parse normal pit or frag pit. */ + if (FIELD_GET(PIT_PD_BUF_TYPE, le32_to_cpu(pit_info->pd_header)) != + FRAG_BAT) { + ret = mtk_dpmaif_get_rx_pkt(rxq, pit_info, rx_record); + } else { + /* Pit sequence: normal pit + frag pit. */ + if (likely(rx_record->cur_skb)) + ret = mtk_dpmaif_get_rx_frag(rxq, pit_info, rx_record); + else + /* Unexpected pit sequence: message pit + frag pit. */ + ret = -DATA_FLOW_CHK_ERR; + } + + if (unlikely(ret < 0)) { + /* Move on pit index to skip error data. */ + rx_record->err_payload = 1; + mtk_dpmaif_common_err_handle(dcb, true); + } + + /* Last one pit of a packet. 
*/ + if (FIELD_GET(PIT_PD_CONT, le32_to_cpu(pit_info->pd_header)) == + DPMAIF_PIT_LASTONE) { + if (likely(rx_record->err_payload == 0)) { + mtk_dpmaif_rx_skb(rxq, rx_record); + } else { + if (rx_record->cur_skb) { + dcb->traffic_stats.rx_errors[rxq->id]++; + dcb->traffic_stats.rx_dropped[rxq->id]++; + dev_kfree_skb_any(rx_record->lro_parent); + rx_record->lro_parent = NULL; + } + } + memset(&rxq->rx_record, 0x00, sizeof(rxq->rx_record)); + recv_pkt_cnt++; + } + } + + cur_pit = mtk_dpmaif_ring_buf_get_next_idx(rxq->pit_cnt, cur_pit); + rxq->pit_rd_idx = cur_pit; + + rxq->pit_rel_cnt++; + } + + *pkt_cnt = recv_pkt_cnt; + + /* Recycle pit and reload bat in batches. */ + ret = mtk_dpmaif_recycle_rx_ring(rxq); + if (unlikely(ret < 0)) + dev_err(DCB_TO_DEV(dcb), "Failed to recycle rxq%u pit\n", rxq->id); + + return ret; +} + +static int mtk_dpmaif_rx_data_collect(struct dpmaif_rxq *rxq, int budget, unsigned int *pkt_cnt) +{ + struct mtk_dpmaif_ctlb *dcb = rxq->dcb; + unsigned int pit_cnt; + int ret; + + /* Get pit count that will be collected and update pit_wr_idx from hardware. */ + ret = mtk_dpmaif_poll_rx_pit(rxq); + if (unlikely(ret < 0)) + goto out; + + pit_cnt = ret; + if (likely(pit_cnt > 0)) { + ret = mtk_dpmaif_rx_data_collect_internal(rxq, pit_cnt, budget, pkt_cnt); + if (ret <= -DATA_DL_ONCE_MORE) { + ret = -DATA_DL_ONCE_MORE; + } else if (ret <= -DATA_ERR_STOP_MAX) { + ret = -DATA_ERR_STOP_MAX; + mtk_dpmaif_common_err_handle(dcb, true); + } else { + ret = 0; + } + } + +out: + return ret; +} + +static int mtk_dpmaif_rx_data_collect_more(struct dpmaif_rxq *rxq, int budget, int *work_done) +{ + unsigned int total_pkt_cnt = 0, pkt_cnt; + int each_budget; + int ret = 0; + + do { + each_budget = budget - total_pkt_cnt; + pkt_cnt = 0; + ret = mtk_dpmaif_rx_data_collect(rxq, each_budget, &pkt_cnt); + total_pkt_cnt += pkt_cnt; + if (ret < 0) + break; + } while (total_pkt_cnt < budget && pkt_cnt > 0 && rxq->started); + + *work_done = total_pkt_cnt; + + return ret; +} + +static int mtk_dpmaif_rx_napi_poll(struct napi_struct *napi, int budget) +{ + struct dpmaif_rxq *rxq = container_of(napi, struct dpmaif_rxq, napi); + struct dpmaif_traffic_stats *stats = &rxq->dcb->traffic_stats; + struct mtk_dpmaif_ctlb *dcb = rxq->dcb; + int work_done = 0; + int ret; + + if (likely(rxq->started)) { + ret = mtk_dpmaif_rx_data_collect_more(rxq, budget, &work_done); + stats->rx_done_last_cnt[rxq->id] += work_done; + if (ret == -DATA_DL_ONCE_MORE) { + napi_gro_flush(napi, false); + work_done = budget; + } + } + + if (work_done < budget) { + napi_complete_done(napi, work_done); + mtk_dpmaif_drv_clear_ip_busy(dcb->drv_info); + mtk_dpmaif_drv_intr_complete(dcb->drv_info, DPMAIF_INTR_DL_DONE, rxq->id, 0); + } + + return work_done; +} + +enum dpmaif_pkt_type { + PKT_UNKNOWN, + PKT_EMPTY_ACK, + PKT_ECHO +}; + +static enum dpmaif_pkt_type mtk_dpmaif_check_skb_type(struct sk_buff *skb, enum mtk_pkt_type type) +{ + int ret = PKT_UNKNOWN; + struct tcphdr *tcph; + int inner_offset; + __be16 frag_off; + u32 total_len; + u32 pkt_type; + u8 nexthdr; + + union { + struct iphdr *v4; + struct ipv6hdr *v6; + unsigned char *hdr; + } ip; + union { + struct icmphdr *v4; + struct icmp6hdr *v6; + unsigned char *hdr; + } icmp; + + pkt_type = skb->data[0] & 0xF0; + if (pkt_type == IPV4_VERSION) { + ip.v4 = (struct iphdr *)(skb->data); + if (ip.v4->protocol == IPPROTO_ICMP) { + icmp.v4 = (struct icmphdr *)(skb->data + (ip.v4->ihl << 2)); + if (icmp.v4->type == ICMP_ECHO) + ret = PKT_ECHO; + } else if (ip.v4->protocol == IPPROTO_TCP) { + 
tcph = (struct tcphdr *)(skb->data + (ip.v4->ihl << 2)); + if (((ip.v4->ihl << 2) + (tcph->doff << 2)) == (ntohs(ip.v4->tot_len)) && + !tcph->syn && !tcph->fin && !tcph->rst) + ret = PKT_EMPTY_ACK; + } + } else if (pkt_type == IPV6_VERSION) { + ip.v6 = (struct ipv6hdr *)skb->data; + nexthdr = ip.v6->nexthdr; + if (ipv6_ext_hdr(nexthdr)) { + /* Now skip over extension headers. */ + inner_offset = ipv6_skip_exthdr(skb, sizeof(struct ipv6hdr), + &nexthdr, &frag_off); + if (inner_offset < 0) + goto out; + } else { + inner_offset = sizeof(struct ipv6hdr); + } + + if (nexthdr == IPPROTO_ICMPV6) { + icmp.v6 = (struct icmp6hdr *)(skb->data + inner_offset); + if (icmp.v6->icmp6_type == ICMPV6_ECHO_REQUEST) + ret = PKT_ECHO; + } else if (nexthdr == IPPROTO_TCP) { + total_len = sizeof(struct ipv6hdr) + ntohs(ip.v6->payload_len); + tcph = (struct tcphdr *)(skb->data + inner_offset); + if (((total_len - inner_offset) == (tcph->doff << 2)) && + !tcph->syn && !tcph->fin && !tcph->rst) + ret = PKT_EMPTY_ACK; + } + } + +out: + return ret; +} + +static int mtk_dpmaif_select_txq_800(struct sk_buff *skb, enum mtk_pkt_type type) +{ + enum dpmaif_pkt_type pkt_type; + __u32 skb_hash; + int q_id; + + if (unlikely(!skb)) { + pr_warn("Invalid parameter\n"); + return -EINVAL; + } + + pkt_type = mtk_dpmaif_check_skb_type(skb, type); + if (pkt_type == PKT_EMPTY_ACK) { + q_id = 1; + } else if (pkt_type == PKT_ECHO) { + q_id = 2; + } else { + skb_hash = skb_get_hash(skb); + q_id = (skb_hash & 0x01) ? 0 : 4; + } + + return q_id; +} + +static void mtk_dpmaif_wake_up_tx_srv(struct dpmaif_tx_srv *tx_srv) +{ + wake_up(&tx_srv->wait); +} + +static int mtk_dpmaif_send_pkt(struct mtk_dpmaif_ctlb *dcb, struct sk_buff *skb, + unsigned char intf_id) +{ + unsigned char vq_id = skb_get_queue_mapping(skb); + struct dpmaif_pkt_info *pkt_info; + unsigned char srv_id; + struct dpmaif_vq *vq; + int ret = 0; + + pkt_info = DPMAIF_SKB_CB(skb); + pkt_info->intf_id = intf_id; + pkt_info->drb_cnt = DPMAIF_GET_DRB_CNT(skb); + + vq = &dcb->tx_vqs[vq_id]; + srv_id = dcb->res_cfg->tx_vq_srv_map[vq_id]; + if (likely(skb_queue_len(&vq->list) < vq->max_len)) + skb_queue_tail(&vq->list, skb); + else + ret = -EBUSY; + + mtk_dpmaif_wake_up_tx_srv(&dcb->tx_srvs[srv_id]); + + return ret; +} + +static int mtk_dpmaif_send_cmd(struct mtk_dpmaif_ctlb *dcb, struct sk_buff *skb) +{ + struct dpmaif_vq *vq = &dcb->cmd_vq; + int ret = 0; + + if (likely(skb_queue_len(&vq->list) < vq->max_len)) + skb_queue_tail(&vq->list, skb); + else + ret = -EBUSY; + + schedule_work(&dcb->cmd_srv.work); + + return ret; +} + +static int mtk_dpmaif_send(struct mtk_data_blk *data_blk, enum mtk_data_type type, + struct sk_buff *skb, u64 data) +{ + int ret; + + if (unlikely(!data_blk || !skb || !data_blk->dcb)) { + pr_warn("Invalid parameter\n"); + return -EINVAL; + } + + if (likely(type == DATA_PKT)) + ret = mtk_dpmaif_send_pkt(data_blk->dcb, skb, data); + else + ret = mtk_dpmaif_send_cmd(data_blk->dcb, skb); + + return ret; +} + +struct mtk_data_trans_ops data_trans_ops = { + .poll = mtk_dpmaif_rx_napi_poll, + .send = mtk_dpmaif_send, +}; + +/* mtk_data_init() - initialize data path + * @mdev: pointer to mtk_md_dev + * Allocate and initialize all software resource of data transction layer and data port layer. + * Return: return value is 0 on success, a negative error code on failure. 
+ */ +int mtk_data_init(struct mtk_md_dev *mdev) +{ + const struct dpmaif_res_cfg *res_cfg; + struct mtk_data_blk *data_blk; + int ret; + + if (!mdev) { + pr_err("Invalid parameter\n"); + return -ENODEV; + } + + data_blk = devm_kzalloc(mdev->dev, sizeof(*data_blk), GFP_KERNEL); + if (!data_blk) + return -ENOMEM; + + data_blk->mdev = mdev; + mdev->data_blk = data_blk; + + if (mdev->hw_ver == 0x0800) { + res_cfg = &res_cfg_t800; + data_trans_ops.select_txq = mtk_dpmaif_select_txq_800; + } else { + dev_err(mdev->dev, "Unsupported mdev, hw_ver=0x%x\n", mdev->hw_ver); + ret = -ENODEV; + goto err_get_hw_ver; + } + + ret = mtk_dpmaif_sw_init(data_blk, res_cfg); + if (ret < 0) { + dev_err(mdev->dev, "Failed to initialize data trans, ret=%d\n", ret); + goto err_get_hw_ver; + } + + return 0; + +err_get_hw_ver: + devm_kfree(mdev->dev, data_blk); + mdev->data_blk = NULL; + + return ret; +} + +/* mtk_data_exit() - deinitialize data path + * @mdev: pointer to mtk_md_dev + * deinitialize and release all software resource of data transction layer and data port layer. + * Return: return value is 0 on success, a negative error code on failure. + */ +int mtk_data_exit(struct mtk_md_dev *mdev) +{ + int ret; + + if (!mdev || !mdev->data_blk) { + pr_err("Invalid parameter\n"); + return -EINVAL; + } + + ret = mtk_dpmaif_sw_exit(mdev->data_blk); + if (ret < 0) + dev_err(mdev->dev, "Failed to exit data trans, ret=%d\n", ret); + + devm_kfree(mdev->dev, mdev->data_blk); + mdev->data_blk = NULL; + + return ret; +} diff --git a/drivers/net/wwan/mediatek/pcie/mtk_pci.c b/drivers/net/wwan/mediatek/pcie/mtk_pci.c index e80d65588101..7c7cb1f733de 100644 --- a/drivers/net/wwan/mediatek/pcie/mtk_pci.c +++ b/drivers/net/wwan/mediatek/pcie/mtk_pci.c @@ -591,6 +591,11 @@ static bool mtk_pci_link_check(struct mtk_md_dev *mdev) return !pci_device_is_present(to_pci_dev(mdev->dev)); } +static bool mtk_pci_mmio_check(struct mtk_md_dev *mdev) +{ + return mtk_pci_mac_read32(mdev->hw_priv, REG_ATR_PCIE_WIN0_T0_SRC_ADDR_LSB) == (u32)-1; +} + static int mtk_pci_get_hp_status(struct mtk_md_dev *mdev) { struct mtk_pci_priv *priv = mdev->hw_priv; @@ -629,6 +634,7 @@ static const struct mtk_hw_ops mtk_pci_ops = { .get_ext_evt_status = mtk_mhccif_get_evt_status, .reset = mtk_pci_reset, .reinit = mtk_pci_reinit, + .mmio_check = mtk_pci_mmio_check, .get_hp_status = mtk_pci_get_hp_status, }; From patchwork Tue Nov 22 11:22:55 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?b?WWFuY2hhbyBZYW5nICjmnajlvabotoUp?= X-Patchwork-Id: 24306 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp2148287wrr; Tue, 22 Nov 2022 03:41:04 -0800 (PST) X-Google-Smtp-Source: AA0mqf4nj9jxnRFVEQ+x0Iy1ixx8ezVe0GlUGOI3q6anR1E32HjTUIsz5taROTwERrb2cXmcuT2i X-Received: by 2002:a17:906:60d0:b0:78d:3f87:1725 with SMTP id f16-20020a17090660d000b0078d3f871725mr3430824ejk.492.1669117263998; Tue, 22 Nov 2022 03:41:03 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1669117263; cv=none; d=google.com; s=arc-20160816; b=b2BezLieJ2Q9/n1w5/oyBGyBf+ZJcV2uNgUHb7IeNDcTa4ZpFDG0C40IdlaSwZ15VR IWh9//W6DtsO5xKed34V4Is95OwDTVOiFBOXh0OWYyqSH8mlI/AtYGQjcLS6hIotN8qI SoLfQxRXPoXv4gdqq+ynZ4ngFUA5Me5DxdxNisKgyVIgyYLAc70YCGIugU2MBT17Lq63 ZEOFPZ7qY4JCynuF7/rhwOSRR5cJIk54Ql2UhR6u4VXJykdR0NLIoJoDxV2zd9h95dVO EBOpp6ei+U0mGcvhRVkclsMj+chXKjYYUppfnVHJz//1ITXoTN9CVnl7WIqOYWGxiv+j LS1w== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; 
15.2.792.15 via Frontend Transport; Tue, 22 Nov 2022 19:23:00 +0800 From: Yanchao Yang To: Loic Poulain , Sergey Ryazanov , Johannes Berg , "David S . Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , netdev ML , kernel ML CC: MTK ML , Liang Lu , Haijun Liu , Hua Yang , Ting Wang , Felix Chen , Mingliang Xu , Min Dong , Aiden Wang , Guohao Zhang , Chris Feng , Yanchao Yang , Lambert Wang , Mingchuang Qiao , Xiayu Zhang , Haozhe Chang , MediaTek Corporation Subject: [PATCH net-next v1 10/13] net: wwan: tmi: Introduce WWAN interface Date: Tue, 22 Nov 2022 19:22:55 +0800 Message-ID: <20221122112255.160752-1-yanchao.yang@mediatek.com> X-Mailer: git-send-email 2.18.0 MIME-Version: 1.0 X-MTK: N X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_MSPIKE_H2,SPF_HELO_PASS, SPF_PASS,UNPARSEABLE_RELAY autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1750196304426672550?= X-GMAIL-MSGID: =?utf-8?q?1750196304426672550?= From: MediaTek Corporation Creates the WWAN interface which implements the wwan_ops for registration with the WWAN framework. WWAN interface also implements the net_device_ops functions used by the network devices. Network device operations include open, stop, start transmission and get states. Signed-off-by: Hua Yang Signed-off-by: MediaTek Corporation --- drivers/net/wwan/mediatek/Makefile | 4 +- drivers/net/wwan/mediatek/mtk_data_plane.h | 25 +- drivers/net/wwan/mediatek/mtk_dpmaif.c | 78 ++- drivers/net/wwan/mediatek/mtk_dpmaif_drv.h | 10 +- drivers/net/wwan/mediatek/mtk_ethtool.c | 179 ++++++ drivers/net/wwan/mediatek/mtk_wwan.c | 665 +++++++++++++++++++++ 6 files changed, 946 insertions(+), 15 deletions(-) create mode 100644 drivers/net/wwan/mediatek/mtk_ethtool.c create mode 100644 drivers/net/wwan/mediatek/mtk_wwan.c diff --git a/drivers/net/wwan/mediatek/Makefile b/drivers/net/wwan/mediatek/Makefile index d48c2a0d33d9..72655ce948bf 100644 --- a/drivers/net/wwan/mediatek/Makefile +++ b/drivers/net/wwan/mediatek/Makefile @@ -13,7 +13,9 @@ mtk_tmi-y = \ mtk_port.o \ mtk_port_io.o \ mtk_fsm.o \ - mtk_dpmaif.o + mtk_dpmaif.o \ + mtk_wwan.o \ + mtk_ethtool.o ccflags-y += -I$(srctree)/$(src)/ ccflags-y += -I$(srctree)/$(src)/pcie/ diff --git a/drivers/net/wwan/mediatek/mtk_data_plane.h b/drivers/net/wwan/mediatek/mtk_data_plane.h index 4daf3ec32c91..40c48b01e02c 100644 --- a/drivers/net/wwan/mediatek/mtk_data_plane.h +++ b/drivers/net/wwan/mediatek/mtk_data_plane.h @@ -22,11 +22,12 @@ enum mtk_data_feature { DATA_F_RXFH = BIT(1), DATA_F_INTR_COALESCE = BIT(2), DATA_F_MULTI_NETDEV = BIT(16), - DATA_F_ETH_PDN = BIT(17), + DATA_F_ETH_PDN = BIT(17) }; struct mtk_data_blk { struct mtk_md_dev *mdev; + struct mtk_wwan_ctlb *wcb; struct mtk_dpmaif_ctlb *dcb; }; @@ -85,6 +86,16 @@ struct mtk_data_trans_ops { struct sk_buff *skb, u64 data); }; +enum mtk_data_evt { + DATA_EVT_MIN, + DATA_EVT_TX_START, + DATA_EVT_TX_STOP, + DATA_EVT_RX_STOP, + DATA_EVT_REG_DEV, + DATA_EVT_UNREG_DEV, + DATA_EVT_MAX +}; + struct mtk_data_trans_info { u32 cap; unsigned char rxq_cnt; @@ -93,9 +104,21 @@ struct mtk_data_trans_info { struct napi_struct **napis; }; +struct mtk_data_port_ops { + int (*init)(struct mtk_data_blk *data_blk, struct mtk_data_trans_info *trans_info); + void (*exit)(struct mtk_data_blk *data_blk); 
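+ /* recv() delivers a downlink skb to the WWAN net device selected by if_id; notify() forwards data plane events such as TX start/stop and device (un)registration. */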
+ int (*recv)(struct mtk_data_blk *data_blk, struct sk_buff *skb, + unsigned char q_id, unsigned char if_id); + void (*notify)(struct mtk_data_blk *data_blk, enum mtk_data_evt evt, u64 data); +}; + +void mtk_ethtool_set_ops(struct net_device *dev); +int mtk_wwan_cmd_execute(struct net_device *dev, enum mtk_data_cmd_type cmd, void *data); +u16 mtk_wwan_select_queue(struct net_device *dev, struct sk_buff *skb, struct net_device *sb_dev); int mtk_data_init(struct mtk_md_dev *mdev); int mtk_data_exit(struct mtk_md_dev *mdev); +extern struct mtk_data_port_ops data_port_ops; extern struct mtk_data_trans_ops data_trans_ops; #endif /* __MTK_DATA_PLANE_H__ */ diff --git a/drivers/net/wwan/mediatek/mtk_dpmaif.c b/drivers/net/wwan/mediatek/mtk_dpmaif.c index a8b23b2cf448..27b7a5dee707 100644 --- a/drivers/net/wwan/mediatek/mtk_dpmaif.c +++ b/drivers/net/wwan/mediatek/mtk_dpmaif.c @@ -425,6 +425,7 @@ enum dpmaif_dump_flag { struct mtk_dpmaif_ctlb { struct mtk_data_blk *data_blk; + struct mtk_data_port_ops *port_ops; struct dpmaif_drv_info *drv_info; struct napi_struct *napi[DPMAIF_RXQ_CNT_MAX]; @@ -1353,7 +1354,7 @@ static unsigned int mtk_dpmaif_poll_tx_drb(struct dpmaif_txq *txq) old_sw_rd_idx = txq->drb_rd_idx; ret = mtk_dpmaif_drv_get_ring_idx(dcb->drv_info, DPMAIF_DRB_RIDX, txq->id); if (unlikely(ret < 0)) { - dev_err(DCB_TO_DEV(dcb), "Failed to read txq%u drb_rd_idx, ret=%d\n", txq->id, ret); + dev_err(DCB_TO_DEV(dcb), "Failed to read txq%u drb_rd_idx, ret=%d", txq->id, ret); mtk_dpmaif_common_err_handle(dcb, true); return 0; } @@ -1419,6 +1420,8 @@ static int mtk_dpmaif_tx_rel_internal(struct dpmaif_txq *txq, txq->drb_rel_rd_idx = cur_idx; atomic_inc(&txq->budget); + if (atomic_read(&txq->budget) > txq->drb_cnt / 8) + dcb->port_ops->notify(dcb->data_blk, DATA_EVT_TX_START, (u64)1 << txq->id); } *real_rel_cnt = i; @@ -2271,6 +2274,7 @@ static void mtk_dpmaif_trans_disable(struct mtk_dpmaif_ctlb *dcb) static void mtk_dpmaif_trans_ctl(struct mtk_dpmaif_ctlb *dcb, bool enable) { mutex_lock(&dcb->trans_ctl_lock); + if (enable) { if (!dcb->trans_enabled) { if (dcb->dpmaif_state == DPMAIF_STATE_PWRON && @@ -2637,8 +2641,7 @@ static int mtk_dpmaif_drv_res_init(struct mtk_dpmaif_ctlb *dcb) if (DPMAIF_GET_HW_VER(dcb) == 0x0800) { dcb->drv_info->drv_ops = &dpmaif_drv_ops_t800; } else { - devm_kfree(DCB_TO_DEV(dcb), dcb->drv_info); - dev_err(DCB_TO_DEV(dcb), "Unsupported mdev, hw_ver=0x%x\n", DPMAIF_GET_HW_VER(dcb)); + dev_err(DCB_TO_DEV(dcb), "Unsupported mdev, hw_ver=0x%x", DPMAIF_GET_HW_VER(dcb)); ret = -EFAULT; } @@ -2788,8 +2791,7 @@ static int mtk_dpmaif_irq_init(struct mtk_dpmaif_ctlb *dcb) irq_param->dpmaif_irq_src = irq_src; irq_param->dev_irq_id = mtk_hw_get_irq_id(DCB_TO_MDEV(dcb), irq_src); if (irq_param->dev_irq_id < 0) { - dev_err(DCB_TO_DEV(dcb), "Failed to allocate irq id, irq_src=%d\n", - irq_src); + dev_err(DCB_TO_DEV(dcb), "Failed to allocate irq id, irq_src=%d", irq_src); ret = -EINVAL; goto err_reg_irq; } @@ -2835,6 +2837,40 @@ static int mtk_dpmaif_irq_exit(struct mtk_dpmaif_ctlb *dcb) return 0; } +static int mtk_dpmaif_port_init(struct mtk_dpmaif_ctlb *dcb) +{ + struct mtk_data_trans_info trans_info; + struct dpmaif_rxq *rxq; + int ret; + int i; + + memset(&trans_info, 0x00, sizeof(struct mtk_data_trans_info)); + trans_info.cap = dcb->res_cfg->cap; + trans_info.txq_cnt = dcb->res_cfg->txq_cnt; + trans_info.rxq_cnt = dcb->res_cfg->rxq_cnt; + trans_info.max_mtu = dcb->bat_info.max_mtu; + + for (i = 0; i < trans_info.rxq_cnt; i++) { + rxq = &dcb->rxqs[i]; + dcb->napi[i] = &rxq->napi; + 
} + trans_info.napis = dcb->napi; + + /* Initialize data port layer. */ + dcb->port_ops = &data_port_ops; + ret = dcb->port_ops->init(dcb->data_blk, &trans_info); + if (ret < 0) + dev_err(DCB_TO_DEV(dcb), + "Failed to initialize data port layer, ret=%d\n", ret); + + return ret; +} + +static void mtk_dpmaif_port_exit(struct mtk_dpmaif_ctlb *dcb) +{ + dcb->port_ops->exit(dcb->data_blk); +} + static int mtk_dpmaif_hw_init(struct mtk_dpmaif_ctlb *dcb) { struct dpmaif_bat_ring *bat_ring; @@ -2980,11 +3016,18 @@ static int mtk_dpmaif_stop(struct mtk_dpmaif_ctlb *dcb) */ dcb->dpmaif_state = DPMAIF_STATE_PWROFF; + /* Stop data port layer tx. */ + dcb->port_ops->notify(dcb->data_blk, DATA_EVT_TX_STOP, 0xff); + /* Stop all tx service. */ mtk_dpmaif_tx_srvs_stop(dcb); /* Stop dpmaif tx/rx handle. */ mtk_dpmaif_trans_ctl(dcb, false); + + /* Stop data port layer rx. */ + dcb->port_ops->notify(dcb->data_blk, DATA_EVT_RX_STOP, 0xff); + out: return 0; } @@ -3002,6 +3045,11 @@ static void mtk_dpmaif_fsm_callback(struct mtk_fsm_param *fsm_param, void *data) case FSM_STATE_OFF: mtk_dpmaif_stop(dcb); + /* Unregister data port, because data port will be + * registered again in FSM_STATE_READY stage. + */ + dcb->port_ops->notify(dcb->data_blk, DATA_EVT_UNREG_DEV, 0); + /* Flush all cmd process. */ flush_work(&dcb->cmd_srv.work); @@ -3013,6 +3061,7 @@ static void mtk_dpmaif_fsm_callback(struct mtk_fsm_param *fsm_param, void *data) mtk_dpmaif_start(dcb); break; case FSM_STATE_READY: + dcb->port_ops->notify(dcb->data_blk, DATA_EVT_REG_DEV, 0); break; case FSM_STATE_MDEE: if (fsm_param->fsm_flag == FSM_F_MDEE_INIT) @@ -3102,6 +3151,12 @@ static int mtk_dpmaif_sw_init(struct mtk_data_blk *data_blk, const struct dpmaif goto err_init_drv_res; } + ret = mtk_dpmaif_port_init(dcb); + if (ret < 0) { + dev_err(DCB_TO_DEV(dcb), "Failed to initialize data port, ret=%d\n", ret); + goto err_init_port; + } + ret = mtk_dpmaif_fsm_init(dcb); if (ret < 0) { dev_err(DCB_TO_DEV(dcb), "Failed to initialize dpmaif fsm, ret=%d\n", ret); @@ -3119,6 +3174,8 @@ static int mtk_dpmaif_sw_init(struct mtk_data_blk *data_blk, const struct dpmaif err_init_irq: mtk_dpmaif_fsm_exit(dcb); err_init_fsm: + mtk_dpmaif_port_exit(dcb); +err_init_port: mtk_dpmaif_drv_res_exit(dcb); err_init_drv_res: mtk_dpmaif_cmd_srvs_exit(dcb); @@ -3147,6 +3204,7 @@ static int mtk_dpmaif_sw_exit(struct mtk_data_blk *data_blk) mtk_dpmaif_irq_exit(dcb); mtk_dpmaif_fsm_exit(dcb); + mtk_dpmaif_port_exit(dcb); mtk_dpmaif_drv_res_exit(dcb); mtk_dpmaif_cmd_srvs_exit(dcb); mtk_dpmaif_tx_srvs_exit(dcb); @@ -3431,7 +3489,6 @@ static int mtk_dpmaif_pit_bid_frag_check(struct dpmaif_rxq *rxq, unsigned int cu bat_ring = &rxq->dcb->bat_info.frag_bat_ring; cur_bat_record = bat_ring->sw_record_base + cur_bid; - if (unlikely(!cur_bat_record->frag.page || cur_bid >= bat_ring->bat_cnt)) { dev_err(DCB_TO_DEV(dcb), "Invalid parameter rxq%u bat%d, bid=%u, bat_cnt=%u\n", @@ -3569,6 +3626,8 @@ static int mtk_dpmaif_rx_skb(struct dpmaif_rxq *rxq, struct dpmaif_rx_record *rx skb_record_rx_queue(new_skb, rxq->id); + /* Send skb to data port. 
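The data port (WWAN) layer resolves the network device for this channel and hands the skb to NAPI GRO.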
*/ + ret = dcb->port_ops->recv(dcb->data_blk, new_skb, rxq->id, rx_record->cur_ch_id); dcb->traffic_stats.rx_packets[rxq->id]++; out: rx_record->lro_parent = NULL; @@ -3931,10 +3990,13 @@ static int mtk_dpmaif_send_pkt(struct mtk_dpmaif_ctlb *dcb, struct sk_buff *skb, vq = &dcb->tx_vqs[vq_id]; srv_id = dcb->res_cfg->tx_vq_srv_map[vq_id]; - if (likely(skb_queue_len(&vq->list) < vq->max_len)) + if (likely(skb_queue_len(&vq->list) < vq->max_len)) { skb_queue_tail(&vq->list, skb); - else + } else { + /* Notify to data port layer, data port should carry off the net device tx queue. */ + dcb->port_ops->notify(dcb->data_blk, DATA_EVT_TX_STOP, (u64)1 << vq_id); ret = -EBUSY; + } mtk_dpmaif_wake_up_tx_srv(&dcb->tx_srvs[srv_id]); diff --git a/drivers/net/wwan/mediatek/mtk_dpmaif_drv.h b/drivers/net/wwan/mediatek/mtk_dpmaif_drv.h index 29b6c99bba42..34ec846e6336 100644 --- a/drivers/net/wwan/mediatek/mtk_dpmaif_drv.h +++ b/drivers/net/wwan/mediatek/mtk_dpmaif_drv.h @@ -84,12 +84,12 @@ enum mtk_drv_err { enum { DPMAIF_CLEAR_INTR, - DPMAIF_UNMASK_INTR, + DPMAIF_UNMASK_INTR }; enum dpmaif_drv_dlq_id { DPMAIF_DLQ0 = 0, - DPMAIF_DLQ1, + DPMAIF_DLQ1 }; struct dpmaif_drv_dlq { @@ -132,7 +132,7 @@ enum dpmaif_drv_ring_type { DPMAIF_PIT, DPMAIF_BAT, DPMAIF_FRAG, - DPMAIF_DRB, + DPMAIF_DRB }; enum dpmaif_drv_ring_idx { @@ -143,7 +143,7 @@ enum dpmaif_drv_ring_idx { DPMAIF_FRAG_WIDX, DPMAIF_FRAG_RIDX, DPMAIF_DRB_WIDX, - DPMAIF_DRB_RIDX, + DPMAIF_DRB_RIDX }; struct dpmaif_drv_irq_en_mask { @@ -184,7 +184,7 @@ enum dpmaif_drv_intr_type { DPMAIF_INTR_DL_FRGCNT_LEN_ERR, DPMAIF_INTR_DL_PITCNT_LEN_ERR, DPMAIF_INTR_DL_DONE, - DPMAIF_INTR_MAX + DPMAIF_INTR_MAX, }; #define DPMAIF_INTR_COUNT ((DPMAIF_INTR_MAX) - (DPMAIF_INTR_MIN) - 1) diff --git a/drivers/net/wwan/mediatek/mtk_ethtool.c b/drivers/net/wwan/mediatek/mtk_ethtool.c new file mode 100644 index 000000000000..b052d41027c2 --- /dev/null +++ b/drivers/net/wwan/mediatek/mtk_ethtool.c @@ -0,0 +1,179 @@ +// SPDX-License-Identifier: BSD-3-Clause-Clear +/* + * Copyright (c) 2022, MediaTek Inc. 
+ */ + +#include +#include + +#include "mtk_data_plane.h" + +#define MTK_MAX_COALESCE_TIME 3 +#define MTK_MAX_COALESCE_FRAMES 1000 + +static int mtk_ethtool_cmd_execute(struct net_device *dev, enum mtk_data_cmd_type cmd, void *data) +{ + return mtk_wwan_cmd_execute(dev, cmd, data); +} + +static void mtk_ethtool_get_strings(struct net_device *dev, u32 sset, u8 *data) +{ + if (sset != ETH_SS_STATS) + return; + + mtk_ethtool_cmd_execute(dev, DATA_CMD_STRING_GET, data); +} + +static int mtk_ethtool_get_sset_count(struct net_device *dev, int sset) +{ + int s_count = 0; + int ret; + + if (sset != ETH_SS_STATS) + return -EOPNOTSUPP; + + ret = mtk_ethtool_cmd_execute(dev, DATA_CMD_STRING_CNT_GET, &s_count); + + if (ret) + return ret; + + return s_count; +} + +static void mtk_ethtool_get_stats(struct net_device *dev, + struct ethtool_stats *stats, u64 *data) +{ + mtk_ethtool_cmd_execute(dev, DATA_CMD_TRANS_DUMP, data); +} + +static int mtk_ethtool_get_coalesce(struct net_device *dev, + struct ethtool_coalesce *ec, + struct kernel_ethtool_coalesce *kec, + struct netlink_ext_ack *ack) +{ + struct mtk_data_intr_coalesce intr_get; + int ret; + + ret = mtk_ethtool_cmd_execute(dev, DATA_CMD_INTR_COALESCE_GET, &intr_get); + + if (ret) + return ret; + + ec->rx_coalesce_usecs = intr_get.rx_coalesce_usecs; + ec->tx_coalesce_usecs = intr_get.tx_coalesce_usecs; + ec->rx_max_coalesced_frames = intr_get.rx_coalesced_frames; + ec->tx_max_coalesced_frames = intr_get.tx_coalesced_frames; + + return 0; +} + +static int mtk_ethtool_set_coalesce(struct net_device *dev, + struct ethtool_coalesce *ec, + struct kernel_ethtool_coalesce *kec, + struct netlink_ext_ack *ack) +{ + struct mtk_data_intr_coalesce intr_set; + + if (ec->rx_coalesce_usecs > MTK_MAX_COALESCE_TIME) + return -EINVAL; + if (ec->tx_coalesce_usecs > MTK_MAX_COALESCE_TIME) + return -EINVAL; + if (ec->rx_max_coalesced_frames > MTK_MAX_COALESCE_FRAMES) + return -EINVAL; + if (ec->tx_max_coalesced_frames > MTK_MAX_COALESCE_FRAMES) + return -EINVAL; + + intr_set.rx_coalesce_usecs = ec->rx_coalesce_usecs; + intr_set.tx_coalesce_usecs = ec->tx_coalesce_usecs; + intr_set.rx_coalesced_frames = ec->rx_max_coalesced_frames; + intr_set.tx_coalesced_frames = ec->tx_max_coalesced_frames; + + return mtk_ethtool_cmd_execute(dev, DATA_CMD_INTR_COALESCE_SET, &intr_set); +} + +static int mtk_ethtool_get_rxfh(struct net_device *dev, u32 *indir, u8 *key, u8 *hfunc) +{ + struct mtk_data_rxfh rxfh; + + if (!indir && !key) + return 0; + + if (hfunc) + *hfunc = ETH_RSS_HASH_TOP; + + rxfh.indir = indir; + rxfh.key = key; + + return mtk_ethtool_cmd_execute(dev, DATA_CMD_RXFH_GET, &rxfh); +} + +static int mtk_ethtool_set_rxfh(struct net_device *dev, const u32 *indir, + const u8 *key, const u8 hfunc) +{ + struct mtk_data_rxfh rxfh; + + if (hfunc != ETH_RSS_HASH_NO_CHANGE) + return -EOPNOTSUPP; + + if (!indir && !key) + return 0; + + rxfh.indir = (u32 *)indir; + rxfh.key = (u8 *)key; + + return mtk_ethtool_cmd_execute(dev, DATA_CMD_RXFH_SET, &rxfh); +} + +static int mtk_ethtool_get_rxfhc(struct net_device *dev, + struct ethtool_rxnfc *rxnfc, u32 *rule_locs) +{ + u32 rx_rings; + int ret; + + /* Only supported %ETHTOOL_GRXRINGS */ + if (!rxnfc || rxnfc->cmd != ETHTOOL_GRXRINGS) + return -EOPNOTSUPP; + + ret = mtk_ethtool_cmd_execute(dev, DATA_CMD_RXQ_NUM_GET, &rx_rings); + if (!ret) + rxnfc->data = rx_rings; + + return ret; +} + +static u32 mtk_ethtool_get_indir_size(struct net_device *dev) +{ + u32 indir_size = 0; + + mtk_ethtool_cmd_execute(dev, DATA_CMD_INDIR_SIZE_GET, &indir_size); 
+ + return indir_size; +} + +static u32 mtk_ethtool_get_hkey_size(struct net_device *dev) +{ + u32 hkey_size = 0; + + mtk_ethtool_cmd_execute(dev, DATA_CMD_HKEY_SIZE_GET, &hkey_size); + + return hkey_size; +} + +static const struct ethtool_ops mtk_wwan_ethtool_ops = { + .supported_coalesce_params = ETHTOOL_COALESCE_USECS | ETHTOOL_COALESCE_MAX_FRAMES, + .get_ethtool_stats = mtk_ethtool_get_stats, + .get_sset_count = mtk_ethtool_get_sset_count, + .get_strings = mtk_ethtool_get_strings, + .get_coalesce = mtk_ethtool_get_coalesce, + .set_coalesce = mtk_ethtool_set_coalesce, + .get_rxfh = mtk_ethtool_get_rxfh, + .set_rxfh = mtk_ethtool_set_rxfh, + .get_rxnfc = mtk_ethtool_get_rxfhc, + .get_rxfh_indir_size = mtk_ethtool_get_indir_size, + .get_rxfh_key_size = mtk_ethtool_get_hkey_size, +}; + +void mtk_ethtool_set_ops(struct net_device *dev) +{ + dev->ethtool_ops = &mtk_wwan_ethtool_ops; +} diff --git a/drivers/net/wwan/mediatek/mtk_wwan.c b/drivers/net/wwan/mediatek/mtk_wwan.c new file mode 100644 index 000000000000..27c3f52ea7f2 --- /dev/null +++ b/drivers/net/wwan/mediatek/mtk_wwan.c @@ -0,0 +1,665 @@ +// SPDX-License-Identifier: BSD-3-Clause-Clear +/* + * Copyright (c) 2022, MediaTek Inc. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "mtk_data_plane.h" +#include "mtk_dev.h" + +#define MTK_NETDEV_MAX 20 +#define MTK_DFLT_INTF_ID 0 +#define MTK_NETDEV_WDT (HZ) +#define MTK_CMD_WDT (HZ) +#define MTK_MAX_INTF_ID (MTK_NETDEV_MAX - 1) +#define MTK_NAPI_POLL_WEIGHT 128 + +static unsigned int napi_budget = MTK_NAPI_POLL_WEIGHT; + +/* struct mtk_wwan_instance - This is netdevice's private data, + * contains information about netdevice. + * @wcb: Contains all information about WWAN port layer. + * @stats: Statistics of netdevice's tx/rx packets. + * @tx_busy: Statistics of netdevice's busy counts. + * @netdev: Pointer to netdevice structure. + * @intf_id: The netdevice's interface id + */ +struct mtk_wwan_instance { + struct mtk_wwan_ctlb *wcb; + struct rtnl_link_stats64 stats; + unsigned long long tx_busy; + struct net_device *netdev; + unsigned int intf_id; +}; + +/* struct mtk_wwan_ctlb - Contains WWAN port layer information and save trans information needed. + * @data_blk: Contains data port, trans layer, md_dev structure. + * @mdev: Pointer of mtk_md_dev. + * @trans_ops: Contains trans layer ops: send, select_txq, napi_poll. + * @napis: Trans layer alloc napi structure by rx queue. + * @dummy_dev: Used for multiple network devices share one napi. + * @cap: Contains different hardware capabilities. + * @max_mtu: The max MTU supported. + * @napi_enable: Mark for napi state. + * @active_cnt: The counter of network devices that are UP. + * @txq_num: Total TX qdisc number. + * @rxq_num: Total RX qdisc number. + * @reg_done: Mark for ntwork devices register state. 
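+ * @wwan_inst: Per-interface private data of the WWAN network devices, indexed by interface id.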
+ */ +struct mtk_wwan_ctlb { + struct mtk_data_blk *data_blk; + struct mtk_md_dev *mdev; + struct mtk_data_trans_ops *trans_ops; + struct mtk_wwan_instance __rcu *wwan_inst[MTK_NETDEV_MAX]; + struct napi_struct **napis; + struct net_device dummy_dev; + + u32 cap; + atomic_t napi_enabled; + unsigned int max_mtu; + unsigned int active_cnt; + unsigned char txq_num; + unsigned char rxq_num; + bool reg_done; +}; + +static void mtk_wwan_set_skb(struct sk_buff *skb, struct net_device *netdev) +{ + unsigned int pkt_type; + + pkt_type = skb->data[0] & 0xF0; + + if (pkt_type == IPV4_VERSION) + skb->protocol = htons(ETH_P_IP); + else + skb->protocol = htons(ETH_P_IPV6); + + skb->dev = netdev; +} + +/* mtk_wwan_data_recv - Collect data packet. + * @data_blk: Save netdev information. + * @q_id: RX queue id, used for select NAPI. + * @intf_id: Interface id, determine this skb belong to which netdev. + */ +static int mtk_wwan_data_recv(struct mtk_data_blk *data_blk, struct sk_buff *skb, + unsigned char q_id, unsigned char intf_id) +{ + struct mtk_wwan_instance *wwan_inst; + struct net_device *netdev; + struct napi_struct *napi; + + if (unlikely(!data_blk || !data_blk->wcb)) + goto err_rx; + + if (intf_id > MTK_MAX_INTF_ID) { + dev_err(data_blk->mdev->dev, "Invalid interface id=%d\n", intf_id); + goto err_rx; + } + + rcu_read_lock(); + wwan_inst = rcu_dereference(data_blk->wcb->wwan_inst[intf_id]); + + if (unlikely(!wwan_inst)) { + dev_err(data_blk->mdev->dev, "Invalid pointer wwan_inst is NULL\n"); + rcu_read_unlock(); + goto err_rx; + } + + napi = data_blk->wcb->napis[q_id]; + netdev = wwan_inst->netdev; + + mtk_wwan_set_skb(skb, netdev); + + wwan_inst->stats.rx_packets++; + wwan_inst->stats.rx_bytes += skb->len; + + napi_gro_receive(napi, skb); + + rcu_read_unlock(); + return 0; + +err_rx: + dev_kfree_skb_any(skb); + return -EINVAL; +} + +static void mtk_wwan_napi_enable(struct mtk_wwan_ctlb *wcb) +{ + int i; + + if (atomic_cmpxchg(&wcb->napi_enabled, 0, 1) == 0) { + for (i = 0; i < wcb->rxq_num; i++) + napi_enable(wcb->napis[i]); + } +} + +static void mtk_wwan_napi_disable(struct mtk_wwan_ctlb *wcb) +{ + int i; + + if (atomic_cmpxchg(&wcb->napi_enabled, 1, 0) == 1) { + for (i = 0; i < wcb->rxq_num; i++) { + napi_synchronize(wcb->napis[i]); + napi_disable(wcb->napis[i]); + } + } +} + +static int mtk_wwan_open(struct net_device *dev) +{ + struct mtk_wwan_instance *wwan_inst = wwan_netdev_drvpriv(dev); + struct mtk_wwan_ctlb *wcb = wwan_inst->wcb; + struct mtk_data_trans_ctl trans_ctl; + int ret; + + if (wcb->active_cnt == 0) { + mtk_wwan_napi_enable(wcb); + trans_ctl.enable = true; + ret = mtk_wwan_cmd_execute(dev, DATA_CMD_TRANS_CTL, &trans_ctl); + if (ret < 0) { + dev_err(wcb->mdev->dev, "Failed to enable trans\n"); + goto err_ctl; + } + } + + wcb->active_cnt++; + + netif_tx_start_all_queues(dev); + netif_carrier_on(dev); + + return 0; + +err_ctl: + mtk_wwan_napi_disable(wcb); + return ret; +} + +static int mtk_wwan_stop(struct net_device *dev) +{ + struct mtk_wwan_instance *wwan_inst = wwan_netdev_drvpriv(dev); + struct mtk_wwan_ctlb *wcb = wwan_inst->wcb; + struct mtk_data_trans_ctl trans_ctl; + int ret; + + netif_carrier_off(dev); + netif_tx_disable(dev); + + if (wcb->active_cnt == 1) { + trans_ctl.enable = false; + ret = mtk_wwan_cmd_execute(dev, DATA_CMD_TRANS_CTL, &trans_ctl); + if (ret < 0) + dev_err(wcb->mdev->dev, "Failed to disable trans\n"); + mtk_wwan_napi_disable(wcb); + } + wcb->active_cnt--; + + return 0; +} + +static void mtk_wwan_select_txq(struct mtk_wwan_instance *wwan_inst, struct 
sk_buff *skb, + enum mtk_pkt_type pkt_type) +{ + u16 qid; + + qid = wwan_inst->wcb->trans_ops->select_txq(skb, pkt_type); + if (qid > wwan_inst->wcb->txq_num) + qid = 0; + + skb_set_queue_mapping(skb, qid); +} + +static netdev_tx_t mtk_wwan_start_xmit(struct sk_buff *skb, struct net_device *dev) +{ + struct mtk_wwan_instance *wwan_inst = wwan_netdev_drvpriv(dev); + unsigned int intf_id = wwan_inst->intf_id; + unsigned int skb_len = skb->len; + int ret; + + if (unlikely(skb->len > dev->mtu)) { + dev_err(wwan_inst->wcb->mdev->dev, + "Failed to write skb,netdev=%s,len=0x%x,MTU=0x%x\n", + dev->name, skb->len, dev->mtu); + goto err_tx; + } + + /* select trans layer virtual queue */ + mtk_wwan_select_txq(wwan_inst, skb, PURE_IP); + + /* Forward skb to trans layer(DPMAIF). */ + ret = wwan_inst->wcb->trans_ops->send(wwan_inst->wcb->data_blk, DATA_PKT, skb, intf_id); + if (ret == -EBUSY) { + wwan_inst->tx_busy++; + return NETDEV_TX_BUSY; + } else if (ret == -EINVAL) { + goto err_tx; + } + + wwan_inst->stats.tx_packets++; + wwan_inst->stats.tx_bytes += skb_len; + goto out; + +err_tx: + wwan_inst->stats.tx_errors++; + wwan_inst->stats.tx_dropped++; + dev_kfree_skb_any(skb); +out: + return NETDEV_TX_OK; +} + +static void mtk_wwan_get_stats(struct net_device *dev, struct rtnl_link_stats64 *stats) +{ + struct mtk_wwan_instance *wwan_inst = wwan_netdev_drvpriv(dev); + + memcpy(stats, &wwan_inst->stats, sizeof(*stats)); +} + +static const struct net_device_ops mtk_netdev_ops = { + .ndo_open = mtk_wwan_open, + .ndo_stop = mtk_wwan_stop, + .ndo_start_xmit = mtk_wwan_start_xmit, + .ndo_get_stats64 = mtk_wwan_get_stats, +}; + +static void mtk_wwan_cmd_complete(void *data) +{ + struct mtk_data_cmd *event; + struct sk_buff *skb = data; + + event = (struct mtk_data_cmd *)skb->data; + complete(&event->done); +} + +static int mtk_wwan_cmd_check(struct net_device *dev, enum mtk_data_cmd_type cmd) +{ + struct mtk_wwan_instance *wwan_inst = wwan_netdev_drvpriv(dev); + int ret = 0; + + switch (cmd) { + case DATA_CMD_INTR_COALESCE_GET: + fallthrough; + case DATA_CMD_INTR_COALESCE_SET: + if (!(wwan_inst->wcb->cap & DATA_F_INTR_COALESCE)) + ret = -EOPNOTSUPP; + break; + case DATA_CMD_INDIR_SIZE_GET: + fallthrough; + case DATA_CMD_HKEY_SIZE_GET: + fallthrough; + case DATA_CMD_RXFH_GET: + fallthrough; + case DATA_CMD_RXFH_SET: + if (!(wwan_inst->wcb->cap & DATA_F_RXFH)) + ret = -EOPNOTSUPP; + break; + case DATA_CMD_RXQ_NUM_GET: + fallthrough; + case DATA_CMD_TRANS_DUMP: + fallthrough; + case DATA_CMD_STRING_CNT_GET: + fallthrough; + case DATA_CMD_STRING_GET: + break; + case DATA_CMD_TRANS_CTL: + break; + default: + ret = -EOPNOTSUPP; + break; + } + + return ret; +} + +static struct sk_buff *mtk_wwan_cmd_alloc(enum mtk_data_cmd_type cmd, unsigned int len) + +{ + struct mtk_data_cmd *event; + struct sk_buff *skb; + + skb = dev_alloc_skb(sizeof(*event) + len); + if (unlikely(!skb)) + return NULL; + + skb_put(skb, len + sizeof(*event)); + event = (struct mtk_data_cmd *)skb->data; + event->cmd = cmd; + event->len = len; + + init_completion(&event->done); + event->data_complete = mtk_wwan_cmd_complete; + + return skb; +} + +static int mtk_wwan_cmd_send(struct net_device *dev, struct sk_buff *skb) +{ + struct mtk_wwan_instance *wwan_inst = wwan_netdev_drvpriv(dev); + struct mtk_data_cmd *event = (struct mtk_data_cmd *)skb->data; + int ret; + + ret = wwan_inst->wcb->trans_ops->send(wwan_inst->wcb->data_blk, DATA_CMD, skb, 0); + if (ret < 0) + return ret; + + if (!wait_for_completion_timeout(&event->done, MTK_CMD_WDT)) + return 
-ETIMEDOUT; + + if (event->ret < 0) + return event->ret; + + return 0; +} + +int mtk_wwan_cmd_execute(struct net_device *dev, + enum mtk_data_cmd_type cmd, void *data) +{ + struct mtk_wwan_instance *wwan_inst; + struct sk_buff *skb; + int ret; + + if (mtk_wwan_cmd_check(dev, cmd)) + return -EOPNOTSUPP; + + skb = mtk_wwan_cmd_alloc(cmd, sizeof(void *)); + if (unlikely(!skb)) + return -ENOMEM; + + SKB_TO_CMD_DATA(skb) = data; + + ret = mtk_wwan_cmd_send(dev, skb); + if (ret < 0) { + wwan_inst = wwan_netdev_drvpriv(dev); + dev_err(wwan_inst->wcb->mdev->dev, + "Failed to excute command:ret=%d,cmd=%d\n", ret, cmd); + } + + if (likely(skb)) + dev_kfree_skb_any(skb); + + return ret; +} + +static int mtk_wwan_start_txq(struct mtk_wwan_ctlb *wcb, u32 qmask) +{ + struct mtk_wwan_instance *wwan_inst; + struct net_device *dev; + int i; + + rcu_read_lock(); + /* All wwan network devices share same HIF queue */ + for (i = 0; i < MTK_NETDEV_MAX; i++) { + wwan_inst = rcu_dereference(wcb->wwan_inst[i]); + if (!wwan_inst) + continue; + + dev = wwan_inst->netdev; + + if (!(dev->flags & IFF_UP)) + continue; + + netif_tx_wake_all_queues(dev); + netif_carrier_on(dev); + } + rcu_read_unlock(); + + return 0; +} + +static int mtk_wwan_stop_txq(struct mtk_wwan_ctlb *wcb, u32 qmask) +{ + struct mtk_wwan_instance *wwan_inst; + struct net_device *dev; + int i; + + rcu_read_lock(); + /* All wwan network devices share same HIF queue */ + for (i = 0; i < MTK_NETDEV_MAX; i++) { + wwan_inst = rcu_dereference(wcb->wwan_inst[i]); + if (!wwan_inst) + continue; + + dev = wwan_inst->netdev; + + if (!(dev->flags & IFF_UP)) + continue; + + netif_carrier_off(dev); + /* the network transmit lock has already been held in the ndo_start_xmit context */ + netif_tx_stop_all_queues(dev); + } + rcu_read_unlock(); + + return 0; +} + +static void mtk_wwan_napi_exit(struct mtk_wwan_ctlb *wcb) +{ + int i; + + for (i = 0; i < wcb->rxq_num; i++) { + if (!wcb->napis[i]) + continue; + netif_napi_del(wcb->napis[i]); + } +} + +static int mtk_wwan_napi_init(struct mtk_wwan_ctlb *wcb, struct net_device *dev) +{ + int i; + + for (i = 0; i < wcb->rxq_num; i++) { + if (!wcb->napis[i]) { + dev_err(wcb->mdev->dev, "Invalid napi pointer, napi=%d", i); + goto err; + } + netif_napi_add_weight(dev, wcb->napis[i], wcb->trans_ops->poll, napi_budget); + } + + return 0; + +err: + for (--i; i >= 0; i--) + netif_napi_del(wcb->napis[i]); + return -EINVAL; +} + +static void mtk_wwan_setup(struct net_device *dev) +{ + dev->watchdog_timeo = MTK_NETDEV_WDT; + dev->mtu = ETH_DATA_LEN; + dev->min_mtu = ETH_MIN_MTU; + + dev->features = NETIF_F_SG; + dev->hw_features = NETIF_F_SG; + + dev->features |= NETIF_F_HW_CSUM; + dev->hw_features |= NETIF_F_HW_CSUM; + + dev->features |= NETIF_F_RXCSUM; + dev->hw_features |= NETIF_F_RXCSUM; + + dev->features |= NETIF_F_GRO; + dev->hw_features |= NETIF_F_GRO; + + dev->features |= NETIF_F_RXHASH; + dev->hw_features |= NETIF_F_RXHASH; + + dev->addr_len = ETH_ALEN; + dev->tx_queue_len = DEFAULT_TX_QUEUE_LEN; + + /* Pure IP device. 
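Frames carry no L2 header, so ARP is disabled and the device type is ARPHRD_NONE.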
*/ + dev->flags = IFF_NOARP; + dev->type = ARPHRD_NONE; + + dev->needs_free_netdev = true; + + dev->netdev_ops = &mtk_netdev_ops; + mtk_ethtool_set_ops(dev); +} + +static int mtk_wwan_newlink(void *ctxt, struct net_device *dev, u32 intf_id, + struct netlink_ext_ack *extack) +{ + struct mtk_wwan_instance *wwan_inst = wwan_netdev_drvpriv(dev); + struct mtk_wwan_ctlb *wcb = ctxt; + int ret; + + if (intf_id > MTK_MAX_INTF_ID) { + ret = -EINVAL; + goto err; + } + + dev->max_mtu = wcb->max_mtu; + + wwan_inst->wcb = wcb; + wwan_inst->netdev = dev; + wwan_inst->intf_id = intf_id; + + if (rcu_access_pointer(wcb->wwan_inst[intf_id])) { + ret = -EBUSY; + goto err; + } + + ret = register_netdevice(dev); + if (ret) + goto err; + + rcu_assign_pointer(wcb->wwan_inst[intf_id], wwan_inst); + + netif_device_attach(dev); + + return 0; +err: + return ret; +} + +static void mtk_wwan_dellink(void *ctxt, struct net_device *dev, + struct list_head *head) +{ + struct mtk_wwan_instance *wwan_inst = wwan_netdev_drvpriv(dev); + int intf_id = wwan_inst->intf_id; + struct mtk_wwan_ctlb *wcb = ctxt; + + if (WARN_ON(rcu_access_pointer(wcb->wwan_inst[intf_id]) != wwan_inst)) + return; + + RCU_INIT_POINTER(wcb->wwan_inst[intf_id], NULL); + unregister_netdevice_queue(dev, head); +} + +static const struct wwan_ops mtk_wwan_ops = { + .priv_size = sizeof(struct mtk_wwan_instance), + .setup = mtk_wwan_setup, + .newlink = mtk_wwan_newlink, + .dellink = mtk_wwan_dellink, +}; + +static void mtk_wwan_notify(struct mtk_data_blk *data_blk, enum mtk_data_evt evt, u64 data) +{ + struct mtk_wwan_ctlb *wcb; + + if (unlikely(!data_blk || !data_blk->wcb)) + return; + + wcb = data_blk->wcb; + + switch (evt) { + case DATA_EVT_TX_START: + mtk_wwan_start_txq(wcb, data); + break; + case DATA_EVT_TX_STOP: + mtk_wwan_stop_txq(wcb, data); + break; + + case DATA_EVT_RX_STOP: + mtk_wwan_napi_disable(wcb); + break; + + case DATA_EVT_REG_DEV: + if (!wcb->reg_done) { + wwan_register_ops(wcb->mdev->dev, &mtk_wwan_ops, wcb, MTK_DFLT_INTF_ID); + wcb->reg_done = true; + } + break; + + case DATA_EVT_UNREG_DEV: + if (wcb->reg_done) { + wwan_unregister_ops(wcb->mdev->dev); + wcb->reg_done = false; + } + break; + + default: + break; + } +} + +static int mtk_wwan_init(struct mtk_data_blk *data_blk, struct mtk_data_trans_info *trans_info) +{ + struct mtk_wwan_ctlb *wcb; + int ret; + + if (unlikely(!data_blk || !trans_info)) + return -EINVAL; + + wcb = devm_kzalloc(data_blk->mdev->dev, sizeof(*wcb), GFP_KERNEL); + if (unlikely(!wcb)) + return -ENOMEM; + + wcb->trans_ops = &data_trans_ops; + wcb->mdev = data_blk->mdev; + wcb->data_blk = data_blk; + wcb->napis = trans_info->napis; + wcb->max_mtu = trans_info->max_mtu; + wcb->cap = trans_info->cap; + wcb->rxq_num = trans_info->rxq_cnt; + wcb->txq_num = trans_info->txq_cnt; + atomic_set(&wcb->napi_enabled, 0); + init_dummy_netdev(&wcb->dummy_dev); + + data_blk->wcb = wcb; + + /* Multiple virtual network devices share one physical device, + * so we use dummy device to enable NAPI for multiple virtual network devices. 
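+ * NAPI requires a backing net_device, so the shared instances are attached to this dummy device instead of any single WWAN interface.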
+ */ + ret = mtk_wwan_napi_init(wcb, &wcb->dummy_dev); + if (ret < 0) + goto err_napi_init; + + return 0; +err_napi_init: + devm_kfree(data_blk->mdev->dev, wcb); + data_blk->wcb = NULL; + + return ret; +} + +static void mtk_wwan_exit(struct mtk_data_blk *data_blk) +{ + struct mtk_wwan_ctlb *wcb; + + if (unlikely(!data_blk || !data_blk->wcb)) + return; + + wcb = data_blk->wcb; + mtk_wwan_napi_exit(wcb); + devm_kfree(data_blk->mdev->dev, wcb); + data_blk->wcb = NULL; +} + +struct mtk_data_port_ops data_port_ops = { + .init = mtk_wwan_init, + .exit = mtk_wwan_exit, + .recv = mtk_wwan_data_recv, + .notify = mtk_wwan_notify, +}; From patchwork Tue Nov 22 11:24:16 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?b?WWFuY2hhbyBZYW5nICjmnajlvabotoUp?= X-Patchwork-Id: 24309 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp2149063wrr; Tue, 22 Nov 2022 03:42:52 -0800 (PST) X-Google-Smtp-Source: AA0mqf51fyoZFXPc4h5VmyIjsmLJfuuWdLO/0dVvTzlmBosstBoP/4nmBg7DqmQMn7/9zwjUYeoy X-Received: by 2002:a17:90a:a003:b0:214:1a8a:a415 with SMTP id q3-20020a17090aa00300b002141a8aa415mr24956718pjp.197.1669117372320; Tue, 22 Nov 2022 03:42:52 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1669117372; cv=none; d=google.com; s=arc-20160816; b=LrsHyrXhP1zw/foCj89BVTKPj2ne4g8mMMvWw8Kc9NkSdHn02fRI7GzE5hOKBxeQSr HYU76AENi1IMOkRqekTX+9NEDr6UlRH6VdYxL2AoRccCGVO4NetDiWGpsgNKqo9Jw7Th xEG8MNZcNXvlOrVFIcyH6KxkQPjQEWxiwCfRxdarFqqAmy4XxIjocAWSCK0f1CBb5ejv mE0C2l/e/yLjcX1w/LJLPfCPf9ycNNhMXOIVnY7Fq7OCK097R5+o47adauVQJ0QSYLfy rU2Y0TrNQNnHJ6I1Kp2ofGeDiXoyBUYC24XputwwjDA19e+Nhg6VsnTUeBD8YpFsLwok J2ng== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:message-id:date:subject:cc:to:from :dkim-signature; bh=gPQ5B+gEfn3XCjbg4Rjff52wNdxbfyG/GbCFrOA/3Po=; b=WrUuRKw8duf+WC2pC+GiFhOYk6zZUays+F9/Hz7qatGO7+eXs+m8bFcBIi6U/kngZ+ DmkoZBTheNbWfXspSwuj04hSyjtIf+xRGobsC3XvayUmSFlwrCQ/Ikal0jDitgXyuwbu QUkeBkJIO549bAYwlXPgS0Z8/EiYlo4tEn5kbbZz4zF3uePXaPfBy4h86yTsvo3N4cI8 JEnnpDYVVgkiztOe/uBMqCfq1vSSZ2pWf+5ZZFguZ010Lq0dF07m/ou2nHn/PLuGazXL Wkpo4xHR7ZpijUT8uJJZ/nzfYWTJf/hzuEz6XCG2bFy11lkNyQgkP90TPgSBY+UsKqNp i9oA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@mediatek.com header.s=dk header.b=iOZHQgth; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=QUARANTINE sp=QUARANTINE dis=NONE) header.from=mediatek.com Received: from out1.vger.email (out1.vger.email. 
From: Yanchao Yang To: Loic Poulain , Sergey Ryazanov , Johannes Berg , "David S .
Miller" , Eric Dumazet , "Jakub Kicinski" , Paolo Abeni , netdev ML , kernel ML CC: MTK ML , Liang Lu , Haijun Liu , Hua Yang , Ting Wang , Felix Chen , Mingliang Xu , Min Dong , Aiden Wang , Guohao Zhang , Chris Feng , "Yanchao Yang" , Lambert Wang , Mingchuang Qiao , Xiayu Zhang , Haozhe Chang , MediaTek Corporation Subject: [PATCH net-next v1 11/13] net: wwan: tmi: Add exception handling service Date: Tue, 22 Nov 2022 19:24:16 +0800 Message-ID: <20221122112417.160844-1-yanchao.yang@mediatek.com> X-Mailer: git-send-email 2.18.0 MIME-Version: 1.0 X-MTK: N X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_MSPIKE_H2,SPF_HELO_PASS, SPF_PASS,UNPARSEABLE_RELAY autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1750196417535159934?= X-GMAIL-MSGID: =?utf-8?q?1750196417535159934?= From: MediaTek Corporation The exception handling service aims to recover the entire system when the host driver detects some exceptions. The scenarios that could trigger exceptions include: - Read/Write error from the transaction layer when the PCIe link brokes. - An RGU interrupt is received. - The OS reports PCIe link failure, e.g., an AER is detected. When an exception happens, the exception module will receive an exception event, and it will use FLDR or PLDR to reset the device. The exception module will also start a timer to check if the PCIe link is back by reading the vendor ID of the device, and it will re-initialize the host driver when the PCIe link comes back. Signed-off-by: Mingliang Xu Signed-off-by: MediaTek Corporation --- drivers/net/wwan/mediatek/Makefile | 3 +- drivers/net/wwan/mediatek/mtk_cldma.c | 21 ++- drivers/net/wwan/mediatek/mtk_dev.c | 8 + drivers/net/wwan/mediatek/mtk_dev.h | 78 ++++++++ drivers/net/wwan/mediatek/mtk_dpmaif.c | 16 +- drivers/net/wwan/mediatek/mtk_dpmaif_drv.h | 10 +- drivers/net/wwan/mediatek/mtk_except.c | 176 ++++++++++++++++++ drivers/net/wwan/mediatek/mtk_fsm.c | 2 + .../wwan/mediatek/pcie/mtk_cldma_drv_t800.c | 15 +- drivers/net/wwan/mediatek/pcie/mtk_pci.c | 47 +++++ 10 files changed, 358 insertions(+), 18 deletions(-) create mode 100644 drivers/net/wwan/mediatek/mtk_except.c diff --git a/drivers/net/wwan/mediatek/Makefile b/drivers/net/wwan/mediatek/Makefile index 72655ce948bf..f0601d2eb604 100644 --- a/drivers/net/wwan/mediatek/Makefile +++ b/drivers/net/wwan/mediatek/Makefile @@ -15,7 +15,8 @@ mtk_tmi-y = \ mtk_fsm.o \ mtk_dpmaif.o \ mtk_wwan.o \ - mtk_ethtool.o + mtk_ethtool.o \ + mtk_except.o ccflags-y += -I$(srctree)/$(src)/ ccflags-y += -I$(srctree)/$(src)/pcie/ diff --git a/drivers/net/wwan/mediatek/mtk_cldma.c b/drivers/net/wwan/mediatek/mtk_cldma.c index 723237547650..4c8852f8ae9c 100644 --- a/drivers/net/wwan/mediatek/mtk_cldma.c +++ b/drivers/net/wwan/mediatek/mtk_cldma.c @@ -180,19 +180,29 @@ static int mtk_cldma_submit_tx(void *dev, struct sk_buff *skb) struct tx_req *req; struct virtq *vq; struct txq *txq; + int ret = 0; int err; vq = cd->trans->vq_tbl + trb->vqno; hw = cd->cldma_hw[vq->hif_id & HIF_ID_BITMASK]; txq = hw->txq[vq->txqno]; - if (!txq->req_budget) - return -EAGAIN; + if (!txq->req_budget) { + if (mtk_hw_mmio_check(hw->mdev)) { + mtk_except_report_evt(hw->mdev, EXCEPT_LINK_ERR); + ret = -EFAULT; + } else { + ret = -EAGAIN; + } + goto err; + } err = 
mtk_dma_map_single(hw->mdev, &data_dma_addr, skb->data, skb->len, DMA_TO_DEVICE); - if (err) - return -EFAULT; + if (err) { + ret = -EFAULT; + goto err; + } mutex_lock(&txq->lock); txq->req_budget--; @@ -213,7 +223,8 @@ static int mtk_cldma_submit_tx(void *dev, struct sk_buff *skb) wmb(); /* ensure GPD setup done before HW start */ - return 0; +err: + return ret; } /* cldma_trb_process() - Dispatch trb request to low-level CLDMA routine diff --git a/drivers/net/wwan/mediatek/mtk_dev.c b/drivers/net/wwan/mediatek/mtk_dev.c index d4472491ce9a..d64cbca5b56d 100644 --- a/drivers/net/wwan/mediatek/mtk_dev.c +++ b/drivers/net/wwan/mediatek/mtk_dev.c @@ -29,6 +29,13 @@ int mtk_dev_init(struct mtk_md_dev *mdev) if (ret) goto err_data_init; + ret = mtk_except_init(mdev); + if (ret) + goto err_except_init; + + return 0; +err_except_init: + mtk_data_exit(mdev); err_data_init: mtk_ctrl_exit(mdev); err_ctrl_init: @@ -46,6 +53,7 @@ void mtk_dev_exit(struct mtk_md_dev *mdev) mtk_data_exit(mdev); mtk_ctrl_exit(mdev); mtk_bm_exit(mdev); + mtk_except_exit(mdev); mtk_fsm_exit(mdev); } diff --git a/drivers/net/wwan/mediatek/mtk_dev.h b/drivers/net/wwan/mediatek/mtk_dev.h index 2739b8068a31..010c789e4dda 100644 --- a/drivers/net/wwan/mediatek/mtk_dev.h +++ b/drivers/net/wwan/mediatek/mtk_dev.h @@ -39,6 +39,7 @@ enum mtk_reset_type { RESET_FLDR, RESET_PLDR, RESET_RGU, + RESET_NONE }; enum mtk_reinit_type { @@ -51,6 +52,15 @@ enum mtk_l1ss_grp { L1SS_EXT_EVT, }; +enum mtk_except_evt { + EXCEPT_LINK_ERR, + EXCEPT_RGU, + EXCEPT_AER_DETECTED, + EXCEPT_AER_RESET, + EXCEPT_AER_RESUME, + EXCEPT_MAX +}; + #define L1SS_BIT_L1(grp) BIT(((grp) << 2) + 1) #define L1SS_BIT_L1_1(grp) BIT(((grp) << 2) + 2) #define L1SS_BIT_L1_2(grp) BIT(((grp) << 2) + 3) @@ -83,6 +93,7 @@ struct mtk_md_dev; * @get_ext_evt_status:Callback to get HW Layer external event status. * @reset: Callback to reset device. * @reinit: Callback to execute device re-initialization. + * @link_check: Callback to execute hardware link check. * @get_hp_status: Callback to get link hotplug status. */ struct mtk_hw_ops { @@ -119,10 +130,18 @@ struct mtk_hw_ops { int (*reset)(struct mtk_md_dev *mdev, enum mtk_reset_type type); int (*reinit)(struct mtk_md_dev *mdev, enum mtk_reinit_type type); + bool (*link_check)(struct mtk_md_dev *mdev); bool (*mmio_check)(struct mtk_md_dev *mdev); int (*get_hp_status)(struct mtk_md_dev *mdev); }; +struct mtk_md_except { + atomic_t flag; + enum mtk_reset_type type; + int pci_ext_irq_id; + struct timer_list timer; +}; + /* mtk_md_dev defines the structure of MTK modem device */ struct mtk_md_dev { struct device *dev; @@ -136,6 +155,7 @@ struct mtk_md_dev { void *ctrl_blk; void *data_blk; struct mtk_bm_ctrl *bm_ctrl; + struct mtk_md_except except; }; int mtk_dev_init(struct mtk_md_dev *mdev); @@ -429,6 +449,17 @@ static inline int mtk_hw_reinit(struct mtk_md_dev *mdev, enum mtk_reinit_type ty return mdev->hw_ops->reinit(mdev, type); } +/* mtk_hw_link_check() -Check if the link is down. + * + * @mdev: Device instance. + * + * Return: 0 indicates link normally, other value indicates link down. + */ +static inline bool mtk_hw_link_check(struct mtk_md_dev *mdev) +{ + return mdev->hw_ops->link_check(mdev); +} + /* mtk_hw_mmio_check() -Check if the PCIe MMIO is ready. * * @mdev: Device instance. @@ -517,4 +548,51 @@ static inline int mtk_dma_unmap_page(struct mtk_md_dev *mdev, return 0; } +/* mtk_except_report_evt() - Report exception event. 
+ * + * @mdev: pointer to mtk_md_dev + * @evt: exception event + * + * Return: + * 0 - OK + * -EFAULT - exception feature is not ready + */ +int mtk_except_report_evt(struct mtk_md_dev *mdev, enum mtk_except_evt evt); + +/* mtk_except_start() - Start exception service. + * + * @mdev: pointer to mtk_md_dev + * + * Return: + * void + */ +void mtk_except_start(struct mtk_md_dev *mdev); + +/* mtk_except_stop() - Stop exception service. + * + * @mdev: pointer to mtk_md_dev + * + * Return: + * void + */ +void mtk_except_stop(struct mtk_md_dev *mdev); + +/* mtk_except_init() - Initialize exception feature. + * + * @mdev: pointer to mtk_md_dev + * + * Return: + * 0 - OK + */ +int mtk_except_init(struct mtk_md_dev *mdev); + +/* mtk_except_exit() - De-Initialize exception feature. + * + * @mdev: pointer to mtk_md_dev + * + * Return: + * 0 - OK + */ +int mtk_except_exit(struct mtk_md_dev *mdev); + #endif /* __MTK_DEV_H__ */ diff --git a/drivers/net/wwan/mediatek/mtk_dpmaif.c b/drivers/net/wwan/mediatek/mtk_dpmaif.c index 27b7a5dee707..b6085110c62a 100644 --- a/drivers/net/wwan/mediatek/mtk_dpmaif.c +++ b/drivers/net/wwan/mediatek/mtk_dpmaif.c @@ -536,10 +536,12 @@ static void mtk_dpmaif_common_err_handle(struct mtk_dpmaif_ctlb *dcb, bool is_hw return; } - if (mtk_hw_mmio_check(DCB_TO_MDEV(dcb))) + if (mtk_hw_mmio_check(DCB_TO_MDEV(dcb))) { dev_err(DCB_TO_DEV(dcb), "Failed to access mmio\n"); - else + mtk_except_report_evt(DCB_TO_MDEV(dcb), EXCEPT_LINK_ERR); + } else { mtk_dpmaif_trigger_dev_exception(dcb); + } } static unsigned int mtk_dpmaif_pit_bid(struct dpmaif_pd_pit *pit_info) @@ -1354,7 +1356,7 @@ static unsigned int mtk_dpmaif_poll_tx_drb(struct dpmaif_txq *txq) old_sw_rd_idx = txq->drb_rd_idx; ret = mtk_dpmaif_drv_get_ring_idx(dcb->drv_info, DPMAIF_DRB_RIDX, txq->id); if (unlikely(ret < 0)) { - dev_err(DCB_TO_DEV(dcb), "Failed to read txq%u drb_rd_idx, ret=%d", txq->id, ret); + dev_err(DCB_TO_DEV(dcb), "Failed to read txq%u drb_rd_idx, ret=%d\n", txq->id, ret); mtk_dpmaif_common_err_handle(dcb, true); return 0; } @@ -2274,7 +2276,6 @@ static void mtk_dpmaif_trans_disable(struct mtk_dpmaif_ctlb *dcb) static void mtk_dpmaif_trans_ctl(struct mtk_dpmaif_ctlb *dcb, bool enable) { mutex_lock(&dcb->trans_ctl_lock); - if (enable) { if (!dcb->trans_enabled) { if (dcb->dpmaif_state == DPMAIF_STATE_PWRON && @@ -2641,7 +2642,8 @@ static int mtk_dpmaif_drv_res_init(struct mtk_dpmaif_ctlb *dcb) if (DPMAIF_GET_HW_VER(dcb) == 0x0800) { dcb->drv_info->drv_ops = &dpmaif_drv_ops_t800; } else { - dev_err(DCB_TO_DEV(dcb), "Unsupported mdev, hw_ver=0x%x", DPMAIF_GET_HW_VER(dcb)); + devm_kfree(DCB_TO_DEV(dcb), dcb->drv_info); + dev_err(DCB_TO_DEV(dcb), "Unsupported mdev, hw_ver=0x%x\n", DPMAIF_GET_HW_VER(dcb)); ret = -EFAULT; } @@ -2791,7 +2793,8 @@ static int mtk_dpmaif_irq_init(struct mtk_dpmaif_ctlb *dcb) irq_param->dpmaif_irq_src = irq_src; irq_param->dev_irq_id = mtk_hw_get_irq_id(DCB_TO_MDEV(dcb), irq_src); if (irq_param->dev_irq_id < 0) { - dev_err(DCB_TO_DEV(dcb), "Failed to allocate irq id, irq_src=%d", irq_src); + dev_err(DCB_TO_DEV(dcb), "Failed to allocate irq id, irq_src=%d\n", + irq_src); ret = -EINVAL; goto err_reg_irq; } @@ -3489,6 +3492,7 @@ static int mtk_dpmaif_pit_bid_frag_check(struct dpmaif_rxq *rxq, unsigned int cu bat_ring = &rxq->dcb->bat_info.frag_bat_ring; cur_bat_record = bat_ring->sw_record_base + cur_bid; + if (unlikely(!cur_bat_record->frag.page || cur_bid >= bat_ring->bat_cnt)) { dev_err(DCB_TO_DEV(dcb), "Invalid parameter rxq%u bat%d, bid=%u, bat_cnt=%u\n", diff --git 
a/drivers/net/wwan/mediatek/mtk_dpmaif_drv.h b/drivers/net/wwan/mediatek/mtk_dpmaif_drv.h index 34ec846e6336..29b6c99bba42 100644 --- a/drivers/net/wwan/mediatek/mtk_dpmaif_drv.h +++ b/drivers/net/wwan/mediatek/mtk_dpmaif_drv.h @@ -84,12 +84,12 @@ enum mtk_drv_err { enum { DPMAIF_CLEAR_INTR, - DPMAIF_UNMASK_INTR + DPMAIF_UNMASK_INTR, }; enum dpmaif_drv_dlq_id { DPMAIF_DLQ0 = 0, - DPMAIF_DLQ1 + DPMAIF_DLQ1, }; struct dpmaif_drv_dlq { @@ -132,7 +132,7 @@ enum dpmaif_drv_ring_type { DPMAIF_PIT, DPMAIF_BAT, DPMAIF_FRAG, - DPMAIF_DRB + DPMAIF_DRB, }; enum dpmaif_drv_ring_idx { @@ -143,7 +143,7 @@ enum dpmaif_drv_ring_idx { DPMAIF_FRAG_WIDX, DPMAIF_FRAG_RIDX, DPMAIF_DRB_WIDX, - DPMAIF_DRB_RIDX + DPMAIF_DRB_RIDX, }; struct dpmaif_drv_irq_en_mask { @@ -184,7 +184,7 @@ enum dpmaif_drv_intr_type { DPMAIF_INTR_DL_FRGCNT_LEN_ERR, DPMAIF_INTR_DL_PITCNT_LEN_ERR, DPMAIF_INTR_DL_DONE, - DPMAIF_INTR_MAX, + DPMAIF_INTR_MAX }; #define DPMAIF_INTR_COUNT ((DPMAIF_INTR_MAX) - (DPMAIF_INTR_MIN) - 1) diff --git a/drivers/net/wwan/mediatek/mtk_except.c b/drivers/net/wwan/mediatek/mtk_except.c new file mode 100644 index 000000000000..e35592d9d2c3 --- /dev/null +++ b/drivers/net/wwan/mediatek/mtk_except.c @@ -0,0 +1,176 @@ +// SPDX-License-Identifier: BSD-3-Clause-Clear +/* + * Copyright (c) 2022, MediaTek Inc. + */ + +#include +#include +#include + +#include "mtk_dev.h" +#include "mtk_fsm.h" + +#define MTK_EXCEPT_HOST_RESET_TIME (2) +#define MTK_EXCEPT_SELF_RESET_TIME (35) +#define MTK_EXCEPT_RESET_TYPE_PLDR BIT(26) +#define MTK_EXCEPT_RESET_TYPE_FLDR BIT(27) + +static void mtk_except_start_monitor(struct mtk_md_dev *mdev, unsigned long expires) +{ + struct mtk_md_except *except = &mdev->except; + + if (!timer_pending(&except->timer) && !mtk_hw_get_hp_status(mdev)) { + except->timer.expires = jiffies + expires; + add_timer(&except->timer); + dev_info(mdev->dev, "Add timer to monitor PCI link\n"); + } +} + +int mtk_except_report_evt(struct mtk_md_dev *mdev, enum mtk_except_evt evt) +{ + struct mtk_md_except *except = &mdev->except; + int err, val; + + if (atomic_read(&except->flag) != 1) + return -EFAULT; + + switch (evt) { + case EXCEPT_LINK_ERR: + err = mtk_hw_mmio_check(mdev); + if (err) + mtk_fsm_evt_submit(mdev, FSM_EVT_LINKDOWN, FSM_F_DFLT, NULL, 0, 0); + break; + case EXCEPT_RGU: + /* delay 20ms to make sure device ready for reset */ + msleep(20); + + val = mtk_hw_get_dev_state(mdev); + dev_info(mdev->dev, "dev_state:0x%x, hw_ver:0x%x, fsm state:%d\n", + val, mdev->hw_ver, mdev->fsm->state); + + /* Invalid dev state will trigger PLDR */ + if (val & MTK_EXCEPT_RESET_TYPE_PLDR) { + except->type = RESET_PLDR; + } else if (val & MTK_EXCEPT_RESET_TYPE_FLDR) { + except->type = RESET_FLDR; + } else if (mdev->fsm->state >= FSM_STATE_READY) { + dev_info(mdev->dev, "HW reboot\n"); + except->type = RESET_NONE; + } else { + dev_info(mdev->dev, "RGU ignored\n"); + break; + } + mtk_fsm_evt_submit(mdev, FSM_EVT_DEV_RESET_REQ, FSM_F_DFLT, NULL, 0, 0); + break; + case EXCEPT_AER_DETECTED: + mtk_fsm_evt_submit(mdev, FSM_EVT_AER, FSM_F_DFLT, NULL, 0, EVT_MODE_BLOCKING); + break; + case EXCEPT_AER_RESET: + err = mtk_hw_reset(mdev, RESET_FLDR); + if (err) + mtk_hw_reset(mdev, RESET_RGU); + break; + case EXCEPT_AER_RESUME: + mtk_except_start_monitor(mdev, HZ); + break; + default: + break; + } + + return 0; +} + +void mtk_except_start(struct mtk_md_dev *mdev) +{ + struct mtk_md_except *except = &mdev->except; + + mtk_hw_unmask_irq(mdev, except->pci_ext_irq_id); +} + +void mtk_except_stop(struct mtk_md_dev *mdev) +{ + struct 
mtk_md_except *except = &mdev->except; + + mtk_hw_mask_irq(mdev, except->pci_ext_irq_id); +} + +static void mtk_except_fsm_handler(struct mtk_fsm_param *param, void *data) +{ + struct mtk_md_except *except = data; + enum mtk_reset_type reset_type; + struct mtk_md_dev *mdev; + unsigned long expires; + int err; + + mdev = container_of(except, struct mtk_md_dev, except); + + switch (param->to) { + case FSM_STATE_POSTDUMP: + mtk_hw_mask_irq(mdev, except->pci_ext_irq_id); + mtk_hw_clear_irq(mdev, except->pci_ext_irq_id); + mtk_hw_unmask_irq(mdev, except->pci_ext_irq_id); + break; + case FSM_STATE_OFF: + if (param->evt_id == FSM_EVT_DEV_RESET_REQ) + reset_type = except->type; + else if (param->evt_id == FSM_EVT_LINKDOWN) + reset_type = RESET_FLDR; + else + break; + + if (reset_type == RESET_NONE) { + expires = MTK_EXCEPT_SELF_RESET_TIME * HZ; + } else { + err = mtk_hw_reset(mdev, reset_type); + if (err) + expires = MTK_EXCEPT_SELF_RESET_TIME * HZ; + else + expires = MTK_EXCEPT_HOST_RESET_TIME * HZ; + } + + mtk_except_start_monitor(mdev, expires); + break; + default: + break; + } +} + +static void mtk_except_link_monitor(struct timer_list *timer) +{ + struct mtk_md_except *except = container_of(timer, struct mtk_md_except, timer); + struct mtk_md_dev *mdev = container_of(except, struct mtk_md_dev, except); + int err; + + err = mtk_hw_link_check(mdev); + if (!err) { + mtk_fsm_evt_submit(mdev, FSM_EVT_REINIT, FSM_F_FULL_REINIT, NULL, 0, 0); + del_timer(&except->timer); + } else { + mod_timer(timer, jiffies + HZ); + } +} + +int mtk_except_init(struct mtk_md_dev *mdev) +{ + struct mtk_md_except *except = &mdev->except; + + except->pci_ext_irq_id = mtk_hw_get_irq_id(mdev, MTK_IRQ_SRC_SAP_RGU); + + mtk_fsm_notifier_register(mdev, MTK_USER_EXCEPT, + mtk_except_fsm_handler, except, FSM_PRIO_1, false); + timer_setup(&except->timer, mtk_except_link_monitor, 0); + atomic_set(&except->flag, 1); + + return 0; +} + +int mtk_except_exit(struct mtk_md_dev *mdev) +{ + struct mtk_md_except *except = &mdev->except; + + atomic_set(&except->flag, 0); + del_timer(&except->timer); + mtk_fsm_notifier_unregister(mdev, MTK_USER_EXCEPT); + + return 0; +} diff --git a/drivers/net/wwan/mediatek/mtk_fsm.c b/drivers/net/wwan/mediatek/mtk_fsm.c index d754a34ade6c..4ba83134a149 100644 --- a/drivers/net/wwan/mediatek/mtk_fsm.c +++ b/drivers/net/wwan/mediatek/mtk_fsm.c @@ -516,6 +516,8 @@ static int mtk_fsm_early_bootup_handler(u32 status, void *__fsm) dev_stage = dev_state & REGION_BITMASK; if (dev_stage >= DEV_STAGE_MAX) { dev_err(mdev->dev, "Invalid dev state 0x%x\n", dev_state); + if (mtk_hw_link_check(mdev)) + mtk_except_report_evt(mdev, EXCEPT_LINK_ERR); return -ENXIO; } diff --git a/drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.c b/drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.c index 42b4358f2653..c58ec64a59bf 100644 --- a/drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.c +++ b/drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.c @@ -364,8 +364,10 @@ static void mtk_cldma_tx_done_work(struct work_struct *work) state = mtk_cldma_check_intr_status(mdev, txq->hw->base_addr, DIR_TX, txq->txqno, QUEUE_XFER_DONE); if (state) { - if (unlikely(state == LINK_ERROR_VAL)) + if (unlikely(state == LINK_ERROR_VAL)) { + mtk_except_report_evt(mdev, EXCEPT_LINK_ERR); return; + } mtk_cldma_clr_intr_status(mdev, txq->hw->base_addr, DIR_TX, txq->txqno, QUEUE_XFER_DONE); @@ -452,6 +454,11 @@ static void mtk_cldma_rx_done_work(struct work_struct *work) if (!state) break; + if (unlikely(state == LINK_ERROR_VAL)) { + 
mtk_except_report_evt(mdev, EXCEPT_LINK_ERR); + return; + } + mtk_cldma_clr_intr_status(mdev, rxq->hw->base_addr, DIR_RX, rxq->rxqno, QUEUE_XFER_DONE); @@ -750,6 +757,9 @@ int mtk_cldma_txq_free_t800(struct cldma_hw *hw, int vqno) devm_kfree(hw->mdev->dev, txq); hw->txq[txqno] = NULL; + if (active == LINK_ERROR_VAL) + mtk_except_report_evt(hw->mdev, EXCEPT_LINK_ERR); + return 0; } @@ -915,6 +925,9 @@ int mtk_cldma_rxq_free_t800(struct cldma_hw *hw, int vqno) devm_kfree(mdev->dev, rxq); hw->rxq[rxqno] = NULL; + if (active == LINK_ERROR_VAL) + mtk_except_report_evt(mdev, EXCEPT_LINK_ERR); + return 0; } diff --git a/drivers/net/wwan/mediatek/pcie/mtk_pci.c b/drivers/net/wwan/mediatek/pcie/mtk_pci.c index 7c7cb1f733de..47727567b0c5 100644 --- a/drivers/net/wwan/mediatek/pcie/mtk_pci.c +++ b/drivers/net/wwan/mediatek/pcie/mtk_pci.c @@ -536,6 +536,8 @@ static int mtk_pci_reset(struct mtk_md_dev *mdev, enum mtk_reset_type type) return mtk_pci_fldr(mdev); case RESET_PLDR: return mtk_pci_pldr(mdev); + default: + break; } return -EINVAL; @@ -547,6 +549,12 @@ static int mtk_pci_reinit(struct mtk_md_dev *mdev, enum mtk_reinit_type type) struct mtk_pci_priv *priv = mdev->hw_priv; int ret, ltr, l1ss; + if (type == REINIT_TYPE_EXP) { + /* We have saved it in probe() */ + pci_load_saved_state(pdev, priv->saved_state); + pci_restore_state(pdev); + } + /* restore ltr */ ltr = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_LTR); if (ltr) { @@ -571,6 +579,9 @@ static int mtk_pci_reinit(struct mtk_md_dev *mdev, enum mtk_reinit_type type) mtk_pci_set_msix_merged(priv, priv->irq_cnt); } + if (type == REINIT_TYPE_EXP) + mtk_pci_clear_irq(mdev, priv->rgu_irq_id); + mtk_pci_unmask_irq(mdev, priv->rgu_irq_id); mtk_pci_unmask_irq(mdev, priv->mhccif_irq_id); @@ -634,6 +645,7 @@ static const struct mtk_hw_ops mtk_pci_ops = { .get_ext_evt_status = mtk_mhccif_get_evt_status, .reset = mtk_pci_reset, .reinit = mtk_pci_reinit, + .link_check = mtk_pci_link_check, .mmio_check = mtk_pci_mmio_check, .get_hp_status = mtk_pci_get_hp_status, }; @@ -654,6 +666,7 @@ static void mtk_mhccif_isr_work(struct work_struct *work) if (unlikely(stat == U32_MAX && mtk_pci_link_check(mdev))) { /* When link failed, we don't need to unmask/clear. 
*/ dev_err(mdev->dev, "Failed to check link in MHCCIF handler.\n"); + mtk_except_report_evt(mdev, EXCEPT_LINK_ERR); return; } @@ -778,6 +791,7 @@ static void mtk_rgu_work(struct work_struct *work) struct mtk_pci_priv *priv; struct mtk_md_dev *mdev; struct pci_dev *pdev; + int ret; priv = container_of(to_delayed_work(work), struct mtk_pci_priv, rgu_work); mdev = priv->mdev; @@ -788,6 +802,10 @@ static void mtk_rgu_work(struct work_struct *work) mtk_pci_mask_irq(mdev, priv->rgu_irq_id); mtk_pci_clear_irq(mdev, priv->rgu_irq_id); + ret = mtk_except_report_evt(mdev, EXCEPT_RGU); + if (ret) + dev_err(mdev->dev, "Failed to report exception with EXCEPT_RGU\n"); + if (!pdev->msix_enabled) return; @@ -800,8 +818,14 @@ static int mtk_rgu_irq_cb(int irq_id, void *data) struct mtk_pci_priv *priv; priv = mdev->hw_priv; + + if (delayed_work_pending(&priv->rgu_work)) + goto exit; + schedule_delayed_work(&priv->rgu_work, msecs_to_jiffies(1)); + dev_info(mdev->dev, "RGU IRQ arrived\n"); +exit: return 0; } @@ -1129,16 +1153,39 @@ static void mtk_pci_remove(struct pci_dev *pdev) static pci_ers_result_t mtk_pci_error_detected(struct pci_dev *pdev, pci_channel_state_t state) { + struct mtk_md_dev *mdev = pci_get_drvdata(pdev); + int ret; + + ret = mtk_except_report_evt(mdev, EXCEPT_AER_DETECTED); + if (ret) + dev_err(mdev->dev, "Failed to call excpetion report API with EXCEPT_AER_DETECTED!\n"); + dev_info(mdev->dev, "AER detected: pci_channel_state_t=%d\n", state); + return PCI_ERS_RESULT_NEED_RESET; } static pci_ers_result_t mtk_pci_slot_reset(struct pci_dev *pdev) { + struct mtk_md_dev *mdev = pci_get_drvdata(pdev); + int ret; + + ret = mtk_except_report_evt(mdev, EXCEPT_AER_RESET); + if (ret) + dev_err(mdev->dev, "Failed to call excpetion report API with EXCEPT_AER_RESET!\n"); + dev_info(mdev->dev, "Slot reset!\n"); + return PCI_ERS_RESULT_RECOVERED; } static void mtk_pci_io_resume(struct pci_dev *pdev) { + struct mtk_md_dev *mdev = pci_get_drvdata(pdev); + int ret; + + ret = mtk_except_report_evt(mdev, EXCEPT_AER_RESUME); + if (ret) + dev_err(mdev->dev, "Failed to call excpetion report API with EXCEPT_AER_RESUME!\n"); + dev_info(mdev->dev, "IO resume!\n"); } static const struct pci_error_handlers mtk_pci_err_handler = { From patchwork Tue Nov 22 11:25:36 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?b?WWFuY2hhbyBZYW5nICjmnajlvabotoUp?= X-Patchwork-Id: 24310 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp2149213wrr; Tue, 22 Nov 2022 03:43:19 -0800 (PST) X-Google-Smtp-Source: AA0mqf4w4jYoPKegm7oJOWt5NDcwM0W1sH4eep5/z3Gw5wEoi7033zg2XNDODjRi7FP80u4QmFyI X-Received: by 2002:a17:90b:3c0d:b0:20d:478a:9d75 with SMTP id pb13-20020a17090b3c0d00b0020d478a9d75mr31289305pjb.149.1669117399639; Tue, 22 Nov 2022 03:43:19 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1669117399; cv=none; d=google.com; s=arc-20160816; b=kbNekMJHi7d3Wwtwt20yg/nvIwJrbYUtyNbIgsu8aiwKcdMbNHD5+/ux29hMrboIcB 9n483C0ZJJT1g7fcCr7Ucnzf3VxJwkSu1UR7OCPQw1XJNgMSpbcOe8obs/z37oRu+abU 1aKCerua2DfRG4p5XgaADWypmxBHmeoUqrKaUNZWAers8MKY5W98EqnYufM4NvODaPC2 iHwYXAIBShFL57m0waUceIrB579u0sUShSpOuv8RtomRJqFoH6Iri7/z0DVrXOvxn3bt sWe+CXTS3o/4I8vGuKSgxQjtBobHeWxKBnopI87Z7hE6pzaTj7ctsz5m/0AO0ph/fNOp i1wA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:message-id:date:subject:cc:to:from :dkim-signature; bh=Ni/RQIJrvx1xI3r/1sWhxabNEDUolXNZ3xq/Q14FBqc=; 
b=VXwOaP4lK+jortuH6K2vSWbxNmi1ejvqsleQNrEcCpktYHYXbXsWzzgWW7sWw+T4AD 0CWBqosGBQ++wwPvu71jTCPFrDIn1MZ5jjbVVXAllulbXlWiycKHYv9nq7qEgVetZxUd Oo8IcLIISJlYg15Jz+25gZcLrSNgmecwfgAliimeD2baPfoErYjDhac4TEHUlYv+LJVw BGAMQ7KJqpRVlgcJjTNsc6K5x6zIhD82d5jDdmb06ZHtWb1lXB70efLHcKpCEsY7d7xc +yXX2xUp3rmte2adE442l53FpJOw+QEQJKf7yKxryo8Ju0A5ZfguLFBV9uVwF7QP0b0G Ks8w== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@mediatek.com header.s=dk header.b=kL5Oz1lp; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=QUARANTINE sp=QUARANTINE dis=NONE) header.from=mediatek.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id jg21-20020a17090326d500b0018905bd4a58si10293264plb.169.2022.11.22.03.43.00; Tue, 22 Nov 2022 03:43:19 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@mediatek.com header.s=dk header.b=kL5Oz1lp; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=QUARANTINE sp=QUARANTINE dis=NONE) header.from=mediatek.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232664AbiKVLcL (ORCPT + 99 others); Tue, 22 Nov 2022 06:32:11 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49956 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229639AbiKVLbX (ORCPT ); Tue, 22 Nov 2022 06:31:23 -0500 Received: from mailgw01.mediatek.com (unknown [60.244.123.138]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6432161769; Tue, 22 Nov 2022 03:25:47 -0800 (PST) X-UUID: d931cbfae6bc43dd8cd0a1e7d9d5948d-20221122 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=mediatek.com; s=dk; h=Content-Type:MIME-Version:Message-ID:Date:Subject:CC:To:From; bh=Ni/RQIJrvx1xI3r/1sWhxabNEDUolXNZ3xq/Q14FBqc=; b=kL5Oz1lpZkcWXPtnipzKhRxk8MXMM7y8YgIhvt31jY4aATrUO/isUOSB+H//YYPdevWUHXhg91T/FVSpFwKjYQOVypGlB2y/a/W3ClqlaCLQMeNGjWAnmfGukW8cd6bB+4yVCSQaXw6iYDoSLrMQ8XY8LSnXkBxge7qnQigx5cE=; X-CID-P-RULE: Release_Ham X-CID-O-INFO: VERSION:1.1.13,REQID:35ba27ce-8a3c-4e37-8fdc-72c1cbdc6216,IP:0,U RL:0,TC:0,Content:-25,EDM:0,RT:0,SF:95,FILE:0,BULK:0,RULE:Release_Ham,ACTI ON:release,TS:70 X-CID-INFO: VERSION:1.1.13,REQID:35ba27ce-8a3c-4e37-8fdc-72c1cbdc6216,IP:0,URL :0,TC:0,Content:-25,EDM:0,RT:0,SF:95,FILE:0,BULK:0,RULE:Spam_GS981B3D,ACTI ON:quarantine,TS:70 X-CID-META: VersionHash:d12e911,CLOUDID:466f7f2f-2938-482e-aafd-98d66723b8a9,B ulkID:221122192545AVQCKR8L,BulkQuantity:0,Recheck:0,SF:38|28|17|19|48,TC:n il,Content:0,EDM:-3,IP:nil,URL:0,File:nil,Bulk:nil,QS:nil,BEC:nil,COL:1 X-UUID: d931cbfae6bc43dd8cd0a1e7d9d5948d-20221122 Received: from mtkmbs10n1.mediatek.inc [(172.21.101.34)] by mailgw01.mediatek.com (envelope-from ) (Generic MTA with TLSv1.2 ECDHE-RSA-AES256-GCM-SHA384 256/256) with ESMTP id 1661493404; Tue, 22 Nov 2022 19:25:44 +0800 Received: from mtkmbs11n2.mediatek.inc (172.21.101.187) by mtkmbs11n2.mediatek.inc (172.21.101.187) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.792.15; Tue, 22 Nov 2022 19:25:42 +0800 Received: from mcddlt001.gcn.mediatek.inc (10.19.240.15) by 
mtkmbs11n2.mediatek.inc (172.21.101.73) with Microsoft SMTP Server id 15.2.792.15 via Frontend Transport; Tue, 22 Nov 2022 19:25:40 +0800 From: Yanchao Yang To: Loic Poulain , Sergey Ryazanov , Johannes Berg , "David S . Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , netdev ML , kernel ML CC: MTK ML , Liang Lu , Haijun Liu , Hua Yang , Ting Wang , Felix Chen , Mingliang Xu , Min Dong , Aiden Wang , Guohao Zhang , Chris Feng , Yanchao Yang , Lambert Wang , Mingchuang Qiao , Xiayu Zhang , Haozhe Chang , MediaTek Corporation Subject: [PATCH net-next v1 12/13] net: wwan: tmi: Add power management support Date: Tue, 22 Nov 2022 19:25:36 +0800 Message-ID: <20221122112536.160930-1-yanchao.yang@mediatek.com> X-Mailer: git-send-email 2.18.0 MIME-Version: 1.0 X-MTK: N X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_MSPIKE_H2,SPF_HELO_PASS, SPF_PASS,UNPARSEABLE_RELAY autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1750196446767335880?= X-GMAIL-MSGID: =?utf-8?q?1750196446767335880?= From: MediaTek Corporation In the TMI driver, both the device and the host system's power management are supported. Regarding the device's power management, the host side has implemented a mechanism to control the device's deep sleep function. If the host side locks the device's deep sleep mode, the device will always be in running state, even though the PCIe link state is in power saving state. If the host side unlocks the device's deep sleep mode, the device may go to low power state by itself while it is still in D0 state from the host side's point of view. To adapt to the host system's power management, some 'dev_pm_ops' callbacks are implemented.They are suspend, resume, freeze, thaw, poweroff, restore, runtime_suspend and runtime_resume. As the device has several hardware modules that need to be set up in different ways during system power management (PM) flows, the driver introduces the 'PM entities' concept. The entities are CLDMA and DPMAIF hardware modules. When a dev_pm_ops function is called, the PM entities list is iterated and the matched function is called for each entry in the list. 
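As a rough illustration of the 'PM entities' concept (see the sketch below), a hypothetical module could hook into the suspend/resume iteration the same way mtk_ctrl_pm_init() and mtk_dpmaif_pm_init() do in the diff further down. The my_blk, my_pm_suspend, my_pm_resume and my_pm_init names are placeholders for illustration only and are not part of the patch.

struct my_blk {
	struct mtk_md_dev *mdev;
	struct mtk_pm_entity pm_entity;
};

static int my_pm_suspend(struct mtk_md_dev *mdev, void *param)
{
	/* param is the my_blk pointer passed at registration;
	 * quiesce the module's traffic here before the device sleeps.
	 */
	return 0;
}

static int my_pm_resume(struct mtk_md_dev *mdev, void *param)
{
	/* Re-enable the module's traffic once the device is back. */
	return 0;
}

static int my_pm_init(struct my_blk *blk)
{
	struct mtk_pm_entity *entity = &blk->pm_entity;

	INIT_LIST_HEAD(&entity->entry);
	/* Each mtk_user_id may register only once; a duplicate
	 * registration is rejected with -EALREADY.
	 */
	entity->user = MTK_USER_CTRL;
	entity->param = blk;
	entity->suspend = my_pm_suspend;	/* unset callbacks are skipped */
	entity->resume = my_pm_resume;

	/* The dev_pm_ops callbacks will now walk this entity. */
	return mtk_pm_entity_register(blk->mdev, entity);
}

Because the iteration only invokes the callbacks an entity actually sets, each module opts into exactly the suspend, suspend_late, resume_early and resume stages it needs.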
Signed-off-by: Hua Yang Signed-off-by: MediaTek Corporation --- drivers/net/wwan/mediatek/Makefile | 3 +- drivers/net/wwan/mediatek/mtk_cldma.c | 52 +- drivers/net/wwan/mediatek/mtk_cldma.h | 2 + drivers/net/wwan/mediatek/mtk_ctrl_plane.c | 65 ++ drivers/net/wwan/mediatek/mtk_ctrl_plane.h | 3 + drivers/net/wwan/mediatek/mtk_dev.c | 8 + drivers/net/wwan/mediatek/mtk_dev.h | 115 ++ drivers/net/wwan/mediatek/mtk_dpmaif.c | 130 ++- drivers/net/wwan/mediatek/mtk_pm.c | 1004 +++++++++++++++++ .../wwan/mediatek/pcie/mtk_cldma_drv_t800.c | 43 + .../wwan/mediatek/pcie/mtk_cldma_drv_t800.h | 2 + drivers/net/wwan/mediatek/pcie/mtk_pci.c | 120 ++ drivers/net/wwan/mediatek/pcie/mtk_reg.h | 4 + 13 files changed, 1544 insertions(+), 7 deletions(-) create mode 100644 drivers/net/wwan/mediatek/mtk_pm.c diff --git a/drivers/net/wwan/mediatek/Makefile b/drivers/net/wwan/mediatek/Makefile index f0601d2eb604..8fe44971e69e 100644 --- a/drivers/net/wwan/mediatek/Makefile +++ b/drivers/net/wwan/mediatek/Makefile @@ -16,7 +16,8 @@ mtk_tmi-y = \ mtk_dpmaif.o \ mtk_wwan.o \ mtk_ethtool.o \ - mtk_except.o + mtk_except.o \ + mtk_pm.o ccflags-y += -I$(srctree)/$(src)/ ccflags-y += -I$(srctree)/$(src)/pcie/ diff --git a/drivers/net/wwan/mediatek/mtk_cldma.c b/drivers/net/wwan/mediatek/mtk_cldma.c index 4c8852f8ae9c..dd75f08c3d96 100644 --- a/drivers/net/wwan/mediatek/mtk_cldma.c +++ b/drivers/net/wwan/mediatek/mtk_cldma.c @@ -35,6 +35,8 @@ static int mtk_cldma_init(struct mtk_ctrl_trans *trans) cd->hw_ops.txq_free = mtk_cldma_txq_free_t800; cd->hw_ops.rxq_free = mtk_cldma_rxq_free_t800; cd->hw_ops.start_xfer = mtk_cldma_start_xfer_t800; + cd->hw_ops.suspend = mtk_cldma_suspend_t800; + cd->hw_ops.resume = mtk_cldma_resume_t800; cd->hw_ops.fsm_state_listener = mtk_cldma_fsm_state_listener_t800; trans->dev[CLDMA_CLASS_ID] = cd; @@ -126,6 +128,7 @@ static int mtk_cldma_open(struct cldma_dev *cd, struct sk_buff *skb) * Return: * 0 - OK * -EPIPE - hardware queue is broken + * -EIO - PCI link error */ static int mtk_cldma_tx(struct cldma_dev *cd, struct sk_buff *skb) { @@ -133,6 +136,7 @@ static int mtk_cldma_tx(struct cldma_dev *cd, struct sk_buff *skb) struct cldma_hw *hw; struct virtq *vq; struct txq *txq; + int err = 0; vq = cd->trans->vq_tbl + trb->vqno; hw = cd->cldma_hw[vq->hif_id & HIF_ID_BITMASK]; @@ -140,9 +144,23 @@ static int mtk_cldma_tx(struct cldma_dev *cd, struct sk_buff *skb) if (txq->is_stopping) return -EPIPE; + pm_runtime_get_sync(hw->mdev->dev); + mtk_pm_ds_lock(hw->mdev, MTK_USER_CTRL); + err = mtk_pm_ds_wait_complete(hw->mdev, MTK_USER_CTRL); + if (unlikely(err)) { + dev_err(hw->mdev->dev, "ds wait err:%d\n", err); + goto exit; + } + cd->hw_ops.start_xfer(hw, vq->txqno); - return 0; +exit: + mtk_pm_ds_unlock(hw->mdev, MTK_USER_CTRL); + pm_runtime_put_sync(hw->mdev->dev); + if (err == -EIO) + mtk_except_report_evt(hw->mdev, EXCEPT_LINK_ERR); + + return err; } /* cldma_close() - De-Initialize CLDMA hardware queue @@ -227,6 +245,36 @@ static int mtk_cldma_submit_tx(void *dev, struct sk_buff *skb) return ret; } +static int mtk_cldma_suspend(struct mtk_ctrl_trans *trans) +{ + struct cldma_dev *cd = trans->dev[CLDMA_CLASS_ID]; + struct cldma_hw *hw; + int i; + + for (i = 0; i < NR_CLDMA; i++) { + hw = cd->cldma_hw[i]; + if (hw) + cd->hw_ops.suspend(hw); + } + + return 0; +} + +static int mtk_cldma_resume(struct mtk_ctrl_trans *trans) +{ + struct cldma_dev *cd = trans->dev[CLDMA_CLASS_ID]; + struct cldma_hw *hw; + int i; + + for (i = 0; i < NR_CLDMA; i++) { + hw = cd->cldma_hw[i]; + if (hw) + 
cd->hw_ops.resume(hw); + } + + return 0; +} + /* cldma_trb_process() - Dispatch trb request to low-level CLDMA routine * * @dev: pointer to CLDMA device @@ -298,6 +346,8 @@ static void mtk_cldma_fsm_state_listener(struct mtk_fsm_param *param, struct mtk struct hif_ops cldma_ops = { .init = mtk_cldma_init, .exit = mtk_cldma_exit, + .suspend = mtk_cldma_suspend, + .resume = mtk_cldma_resume, .trb_process = mtk_cldma_trb_process, .submit_tx = mtk_cldma_submit_tx, .fsm_state_listener = mtk_cldma_fsm_state_listener, diff --git a/drivers/net/wwan/mediatek/mtk_cldma.h b/drivers/net/wwan/mediatek/mtk_cldma.h index c9656aa31455..bbc29ed39823 100644 --- a/drivers/net/wwan/mediatek/mtk_cldma.h +++ b/drivers/net/wwan/mediatek/mtk_cldma.h @@ -135,6 +135,8 @@ struct cldma_hw_ops { int (*txq_free)(struct cldma_hw *hw, int vqno); int (*rxq_free)(struct cldma_hw *hw, int vqno); int (*start_xfer)(struct cldma_hw *hw, int qno); + void (*suspend)(struct cldma_hw *hw); + void (*resume)(struct cldma_hw *hw); void (*fsm_state_listener)(struct mtk_fsm_param *param, struct cldma_hw *hw); }; diff --git a/drivers/net/wwan/mediatek/mtk_ctrl_plane.c b/drivers/net/wwan/mediatek/mtk_ctrl_plane.c index fb1597a22bc7..fd88e20e7d8a 100644 --- a/drivers/net/wwan/mediatek/mtk_ctrl_plane.c +++ b/drivers/net/wwan/mediatek/mtk_ctrl_plane.c @@ -7,6 +7,7 @@ #include #include #include +#include #include #include @@ -280,7 +281,9 @@ int mtk_ctrl_trb_submit(struct mtk_ctrl_blk *blk, struct sk_buff *skb) else skb_queue_tail(&trans->skb_list[vqno], skb); + pm_runtime_get_sync(blk->mdev->dev); wake_up(&trans->trb_srv->trb_waitq); + pm_runtime_put_sync(blk->mdev->dev); return 0; } @@ -361,6 +364,61 @@ static void mtk_ctrl_trans_fsm_state_handler(struct mtk_fsm_param *param, } } +static int mtk_ctrl_pm_suspend(struct mtk_md_dev *mdev, void *param) +{ + struct mtk_ctrl_blk *ctrl_blk = param; + int i; + + kthread_park(ctrl_blk->trans->trb_srv->trb_thread); + + for (i = 0; i < HIF_CLASS_NUM; i++) + ctrl_blk->trans->ops[i]->suspend(ctrl_blk->trans); + + return 0; +} + +static int mtk_ctrl_pm_resume(struct mtk_md_dev *mdev, void *param) +{ + struct mtk_ctrl_blk *ctrl_blk = param; + int i; + + for (i = 0; i < HIF_CLASS_NUM; i++) + ctrl_blk->trans->ops[i]->resume(ctrl_blk->trans); + + kthread_unpark(ctrl_blk->trans->trb_srv->trb_thread); + + return 0; +} + +static int mtk_ctrl_pm_init(struct mtk_ctrl_blk *ctrl_blk) +{ + struct mtk_pm_entity *pm_entity; + int ret; + + pm_entity = &ctrl_blk->pm_entity; + INIT_LIST_HEAD(&pm_entity->entry); + pm_entity->user = MTK_USER_CTRL; + pm_entity->param = ctrl_blk; + pm_entity->suspend = mtk_ctrl_pm_suspend; + pm_entity->resume = mtk_ctrl_pm_resume; + ret = mtk_pm_entity_register(ctrl_blk->mdev, pm_entity); + if (ret < 0) + dev_err(ctrl_blk->mdev->dev, "Failed to register ctrl pm_entity\n"); + + return ret; +} + +static int mtk_ctrl_pm_exit(struct mtk_ctrl_blk *ctrl_blk) +{ + int ret; + + ret = mtk_pm_entity_unregister(ctrl_blk->mdev, &ctrl_blk->pm_entity); + if (ret < 0) + dev_err(ctrl_blk->mdev->dev, "Failed to unregister ctrl pm_entity\n"); + + return ret; +} + static void mtk_ctrl_fsm_state_listener(struct mtk_fsm_param *param, void *data) { struct mtk_ctrl_blk *ctrl_blk = data; @@ -415,8 +473,14 @@ int mtk_ctrl_init(struct mtk_md_dev *mdev) goto err_port_exit; } + err = mtk_ctrl_pm_init(ctrl_blk); + if (err) + goto err_unregister_notifiers; + return 0; +err_unregister_notifiers: + mtk_fsm_notifier_unregister(mdev, MTK_USER_CTRL); err_port_exit: mtk_port_mngr_exit(ctrl_blk); err_destroy_pool_63K: @@ 
-433,6 +497,7 @@ int mtk_ctrl_exit(struct mtk_md_dev *mdev) { struct mtk_ctrl_blk *ctrl_blk = mdev->ctrl_blk; + mtk_ctrl_pm_exit(ctrl_blk); mtk_fsm_notifier_unregister(mdev, MTK_USER_CTRL); mtk_port_mngr_exit(ctrl_blk); mtk_bm_pool_destroy(mdev, ctrl_blk->bm_pool); diff --git a/drivers/net/wwan/mediatek/mtk_ctrl_plane.h b/drivers/net/wwan/mediatek/mtk_ctrl_plane.h index 87f2f9b5f481..cb2284090cdf 100644 --- a/drivers/net/wwan/mediatek/mtk_ctrl_plane.h +++ b/drivers/net/wwan/mediatek/mtk_ctrl_plane.h @@ -78,6 +78,8 @@ struct virtq { struct hif_ops { int (*init)(struct mtk_ctrl_trans *trans); int (*exit)(struct mtk_ctrl_trans *trans); + int (*suspend)(struct mtk_ctrl_trans *trans); + int (*resume)(struct mtk_ctrl_trans *trans); int (*submit_tx)(void *dev, struct sk_buff *skb); int (*trb_process)(void *dev, struct sk_buff *skb); void (*fsm_state_listener)(struct mtk_fsm_param *param, struct mtk_ctrl_trans *trans); @@ -100,6 +102,7 @@ struct mtk_ctrl_blk { struct mtk_ctrl_trans *trans; struct mtk_bm_pool *bm_pool; struct mtk_bm_pool *bm_pool_63K; + struct mtk_pm_entity pm_entity; }; int mtk_ctrl_vq_search(struct mtk_ctrl_blk *ctrl_blk, unsigned char peer_id, diff --git a/drivers/net/wwan/mediatek/mtk_dev.c b/drivers/net/wwan/mediatek/mtk_dev.c index d64cbca5b56d..79dd02eb7032 100644 --- a/drivers/net/wwan/mediatek/mtk_dev.c +++ b/drivers/net/wwan/mediatek/mtk_dev.c @@ -17,6 +17,10 @@ int mtk_dev_init(struct mtk_md_dev *mdev) if (ret) goto err_fsm_init; + ret = mtk_pm_init(mdev); + if (ret) + goto err_pm_init; + ret = mtk_bm_init(mdev); if (ret) goto err_bm_init; @@ -41,6 +45,8 @@ int mtk_dev_init(struct mtk_md_dev *mdev) err_ctrl_init: mtk_bm_exit(mdev); err_bm_init: + mtk_pm_exit(mdev); +err_pm_init: mtk_fsm_exit(mdev); err_fsm_init: return ret; @@ -50,9 +56,11 @@ void mtk_dev_exit(struct mtk_md_dev *mdev) { mtk_fsm_evt_submit(mdev, FSM_EVT_DEV_RM, 0, NULL, 0, EVT_MODE_BLOCKING | EVT_MODE_TOHEAD); + mtk_pm_exit_early(mdev); mtk_data_exit(mdev); mtk_ctrl_exit(mdev); mtk_bm_exit(mdev); + mtk_pm_exit(mdev); mtk_except_exit(mdev); mtk_fsm_exit(mdev); } diff --git a/drivers/net/wwan/mediatek/mtk_dev.h b/drivers/net/wwan/mediatek/mtk_dev.h index 010c789e4dda..c37ce58ddc99 100644 --- a/drivers/net/wwan/mediatek/mtk_dev.h +++ b/drivers/net/wwan/mediatek/mtk_dev.h @@ -35,6 +35,10 @@ enum mtk_user_id { MTK_USER_MAX }; +enum mtk_d2h_sw_evt { + D2H_SW_EVT_PM_LOCK_ACK = 0, +}; + enum mtk_reset_type { RESET_FLDR, RESET_PLDR, @@ -95,6 +99,7 @@ struct mtk_md_dev; * @reinit: Callback to execute device re-initialization. * @link_check: Callback to execute hardware link check. * @get_hp_status: Callback to get link hotplug status. + * @write_pm_cnt: Callback to write PM counter to notify device. */ struct mtk_hw_ops { /* Read value from MD. 
For PCIe, it's BAR 2/3 MMIO read */ @@ -118,6 +123,7 @@ struct mtk_hw_ops { int (*mask_irq)(struct mtk_md_dev *mdev, int irq_id); int (*unmask_irq)(struct mtk_md_dev *mdev, int irq_id); int (*clear_irq)(struct mtk_md_dev *mdev, int irq_id); + void (*clear_sw_evt)(struct mtk_md_dev *mdev, enum mtk_d2h_sw_evt evt); /* External event related */ int (*register_ext_evt)(struct mtk_md_dev *mdev, u32 chs, int (*evt_cb)(u32 status, void *data), void *data); @@ -133,6 +139,7 @@ struct mtk_hw_ops { bool (*link_check)(struct mtk_md_dev *mdev); bool (*mmio_check)(struct mtk_md_dev *mdev); int (*get_hp_status)(struct mtk_md_dev *mdev); + void (*write_pm_cnt)(struct mtk_md_dev *mdev, u32 val); }; struct mtk_md_except { @@ -142,6 +149,72 @@ struct mtk_md_except { struct timer_list timer; }; +enum mtk_suspend_flag { + SUSPEND_F_INIT = 0, + SUSPEND_F_SLEEP = 1 +}; + +enum mtk_pm_resume_state { + PM_RESUME_STATE_L3 = 0, + PM_RESUME_STATE_L1, + PM_RESUME_STATE_INIT, + PM_RESUME_STATE_L1_EXCEPT, + PM_RESUME_STATE_L2, + PM_RESUME_STATE_L2_EXCEPT +}; + +struct mtk_pm_cfg { + u32 ds_delayed_unlock_timeout_ms; + u32 ds_lock_wait_timeout_ms; + u32 suspend_wait_timeout_ms; + u32 resume_wait_timeout_ms; + u32 suspend_wait_timeout_sap_ms; + u32 resume_wait_timeout_sap_ms; + u32 ds_lock_polling_max_us; + u32 ds_lock_polling_min_us; + u32 ds_lock_polling_interval_us; + unsigned short runtime_idle_delay; +}; + +struct mtk_md_pm { + struct list_head entities; + /* entity_mtx is to protect concurrently + * read or write of pm entity list. + */ + struct mutex entity_mtx; + int irq_id; + u32 ext_evt_chs; + unsigned long state; + + /* ds_spinlock is to protect concurrently + * ds lock or unlock procedure. + */ + spinlock_t ds_spinlock; + struct completion ds_lock_complete; + atomic_t ds_lock_refcnt; + struct delayed_work ds_unlock_work; + u64 ds_lock_sent; + u64 ds_lock_recv; + + struct completion pm_ack; + struct completion pm_ack_sap; + struct delayed_work resume_work; + + bool resume_from_l3; + struct mtk_pm_cfg cfg; +}; + +struct mtk_pm_entity { + struct list_head entry; + enum mtk_user_id user; + void *param; + + int (*suspend)(struct mtk_md_dev *mdev, void *param); + int (*suspend_late)(struct mtk_md_dev *mdev, void *param); + int (*resume_early)(struct mtk_md_dev *mdev, void *param); + int (*resume)(struct mtk_md_dev *mdev, void *param); +}; + /* mtk_md_dev defines the structure of MTK modem device */ struct mtk_md_dev { struct device *dev; @@ -152,6 +225,7 @@ struct mtk_md_dev { char dev_str[MTK_DEV_STR_LEN]; struct mtk_md_fsm *fsm; + struct mtk_md_pm pm; void *ctrl_blk; void *data_blk; struct mtk_bm_ctrl *bm_ctrl; @@ -162,6 +236,27 @@ int mtk_dev_init(struct mtk_md_dev *mdev); void mtk_dev_exit(struct mtk_md_dev *mdev); int mtk_dev_start(struct mtk_md_dev *mdev); +int mtk_pm_init(struct mtk_md_dev *mdev); +int mtk_pm_exit(struct mtk_md_dev *mdev); +int mtk_pm_entity_register(struct mtk_md_dev *mdev, struct mtk_pm_entity *md_entity); +int mtk_pm_entity_unregister(struct mtk_md_dev *mdev, struct mtk_pm_entity *md_entity); +int mtk_pm_ds_lock(struct mtk_md_dev *mdev, enum mtk_user_id user); +int mtk_pm_ds_unlock(struct mtk_md_dev *mdev, enum mtk_user_id user); +int mtk_pm_ds_wait_complete(struct mtk_md_dev *mdev, enum mtk_user_id user); +int mtk_pm_exit_early(struct mtk_md_dev *mdev); +bool mtk_pm_check_dev_reset(struct mtk_md_dev *mdev); + +int mtk_pm_runtime_idle(struct device *dev); +int mtk_pm_runtime_suspend(struct device *dev); +int mtk_pm_runtime_resume(struct device *dev, bool atr_init); +int 
mtk_pm_suspend(struct device *dev); +int mtk_pm_resume(struct device *dev, bool atr_init); +int mtk_pm_freeze(struct device *dev); +int mtk_pm_thaw(struct device *dev, bool atr_init); +int mtk_pm_poweroff(struct device *dev); +int mtk_pm_restore(struct device *dev, bool atr_init); +void mtk_pm_shutdown(struct mtk_md_dev *mdev); + /* mtk_hw_read32() -Read dword from register. * * @mdev: Device instance. @@ -345,6 +440,16 @@ static inline int mtk_hw_clear_irq(struct mtk_md_dev *mdev, int irq_id) return mdev->hw_ops->clear_irq(mdev, irq_id); } +/* mtk_hw_clear_sw_evt() -Clear software event. + * + * @mdev: Device instance. + * @evt: Software event to clear. + */ +static inline void mtk_hw_clear_sw_evt(struct mtk_md_dev *mdev, enum mtk_d2h_sw_evt evt) +{ + mdev->hw_ops->clear_sw_evt(mdev, evt); +} + /* mtk_hw_register_ext_evt() -Register callback to external events. * * @mdev: Device instance. @@ -482,6 +587,16 @@ static inline int mtk_hw_get_hp_status(struct mtk_md_dev *mdev) return mdev->hw_ops->get_hp_status(mdev); } +/* mtk_hw_write_pm_cnt() -Write PM counter to device. + * + * @mdev: Device instance. + * @val: The value that host driver wants to write. + */ +static inline void mtk_hw_write_pm_cnt(struct mtk_md_dev *mdev, u32 val) +{ + mdev->hw_ops->write_pm_cnt(mdev, val); +} + static inline void *mtk_dma_alloc_coherent(struct mtk_md_dev *mdev, size_t size, dma_addr_t *addr, gfp_t flag) { diff --git a/drivers/net/wwan/mediatek/mtk_dpmaif.c b/drivers/net/wwan/mediatek/mtk_dpmaif.c index b6085110c62a..b80b71a769e8 100644 --- a/drivers/net/wwan/mediatek/mtk_dpmaif.c +++ b/drivers/net/wwan/mediatek/mtk_dpmaif.c @@ -7,6 +7,7 @@ #include #include #include +#include #include #include #include @@ -427,10 +428,13 @@ struct mtk_dpmaif_ctlb { struct mtk_data_blk *data_blk; struct mtk_data_port_ops *port_ops; struct dpmaif_drv_info *drv_info; + struct mtk_pm_entity pm_entity; struct napi_struct *napi[DPMAIF_RXQ_CNT_MAX]; enum dpmaif_state dpmaif_state; + bool dpmaif_pm_ready; bool dpmaif_user_ready; + bool dpmaif_suspending; bool trans_enabled; /* lock for enable/disable routine */ struct mutex trans_ctl_lock; @@ -927,6 +931,14 @@ static void mtk_dpmaif_bat_reload_work(struct work_struct *work) bat_info = container_of(bat_ring, struct dpmaif_bat_info, frag_bat_ring); dcb = bat_info->dcb; + pm_runtime_get(DCB_TO_DEV(dcb)); + mtk_pm_ds_lock(DCB_TO_MDEV(dcb), MTK_USER_DPMAIF); + ret = mtk_pm_ds_wait_complete(DCB_TO_MDEV(dcb), MTK_USER_DPMAIF); + if (unlikely(ret < 0)) { + dev_err(DCB_TO_DEV(dcb), "Failed to wait ds_lock\n"); + mtk_dpmaif_common_err_handle(dcb, true); + goto out; + } if (bat_ring->type == NORMAL_BAT) { /* Recycle normal bat and reload rx normal buffer. 
*/ @@ -934,7 +946,7 @@ static void mtk_dpmaif_bat_reload_work(struct work_struct *work) if (unlikely(ret < 0)) { dev_err(DCB_TO_DEV(dcb), "Failed to recycle normal bat and reload rx buffer\n"); - return; + goto out; } if (bat_ring->bat_cnt_err_intr_set) { @@ -949,7 +961,7 @@ static void mtk_dpmaif_bat_reload_work(struct work_struct *work) if (unlikely(ret < 0)) { dev_err(DCB_TO_DEV(dcb), "Failed to recycle frag bat and reload rx buffer\n"); - return; + goto out; } if (bat_ring->bat_cnt_err_intr_set) { @@ -959,6 +971,10 @@ static void mtk_dpmaif_bat_reload_work(struct work_struct *work) } } } + +out: + mtk_pm_ds_unlock(DCB_TO_MDEV(dcb), MTK_USER_DPMAIF); + pm_runtime_put(DCB_TO_DEV(dcb)); } static void mtk_dpmaif_queue_bat_reload_work(struct mtk_dpmaif_ctlb *dcb) @@ -1332,6 +1348,16 @@ static void mtk_dpmaif_tx_doorbell(struct work_struct *work) txq = container_of(dwork, struct dpmaif_txq, doorbell_work); dcb = txq->dcb; + pm_runtime_get_sync(DCB_TO_DEV(dcb)); + mtk_pm_ds_lock(DCB_TO_MDEV(dcb), MTK_USER_DPMAIF); + + ret = mtk_pm_ds_wait_complete(DCB_TO_MDEV(dcb), MTK_USER_DPMAIF); + if (unlikely(ret < 0)) { + dev_err(DCB_TO_DEV(dcb), "Failed to wait ds_lock\n"); + mtk_dpmaif_common_err_handle(dcb, true); + goto out; + } + to_submit_cnt = atomic_read(&txq->to_submit_cnt); if (to_submit_cnt > 0) { @@ -1344,6 +1370,10 @@ static void mtk_dpmaif_tx_doorbell(struct work_struct *work) atomic_sub(to_submit_cnt, &txq->to_submit_cnt); } + +out: + mtk_pm_ds_unlock(DCB_TO_MDEV(dcb), MTK_USER_DPMAIF); + pm_runtime_put_sync(DCB_TO_DEV(dcb)); } static unsigned int mtk_dpmaif_poll_tx_drb(struct dpmaif_txq *txq) @@ -1476,6 +1506,8 @@ static void mtk_dpmaif_tx_done(struct work_struct *work) mtk_dpmaif_drv_intr_complete(dcb->drv_info, DPMAIF_INTR_UL_DONE, txq->id, DPMAIF_UNMASK_INTR); } + + pm_runtime_put(DCB_TO_DEV(dcb)); } static int mtk_dpmaif_txq_init(struct mtk_dpmaif_ctlb *dcb, struct dpmaif_txq *txq) @@ -1567,7 +1599,8 @@ static int mtk_dpmaif_sw_wait_txq_stop(struct mtk_dpmaif_ctlb *dcb, struct dpmai flush_delayed_work(&txq->tx_done_work); /* Wait tx doorbell work done. */ - flush_delayed_work(&txq->doorbell_work); + if (!dcb->dpmaif_suspending) + flush_delayed_work(&txq->doorbell_work); return 0; } @@ -2278,7 +2311,8 @@ static void mtk_dpmaif_trans_ctl(struct mtk_dpmaif_ctlb *dcb, bool enable) mutex_lock(&dcb->trans_ctl_lock); if (enable) { if (!dcb->trans_enabled) { - if (dcb->dpmaif_state == DPMAIF_STATE_PWRON && + if (dcb->dpmaif_pm_ready && + dcb->dpmaif_state == DPMAIF_STATE_PWRON && dcb->dpmaif_user_ready) { mtk_dpmaif_trans_enable(dcb); dcb->trans_enabled = true; @@ -2286,7 +2320,8 @@ static void mtk_dpmaif_trans_ctl(struct mtk_dpmaif_ctlb *dcb, bool enable) } } else { if (dcb->trans_enabled) { - if (!(dcb->dpmaif_state == DPMAIF_STATE_PWRON) || + if (!dcb->dpmaif_pm_ready || + !(dcb->dpmaif_state == DPMAIF_STATE_PWRON) || !dcb->dpmaif_user_ready) { mtk_dpmaif_trans_disable(dcb); dcb->trans_enabled = false; @@ -2602,8 +2637,21 @@ static void mtk_dpmaif_cmd_handle(struct dpmaif_cmd_srv *srv) static void mtk_dpmaif_cmd_srv(struct work_struct *work) { struct dpmaif_cmd_srv *srv = container_of(work, struct dpmaif_cmd_srv, work); + struct mtk_dpmaif_ctlb *dcb = srv->dcb; + int ret; + + pm_runtime_get_sync(DCB_TO_DEV(dcb)); + mtk_pm_ds_lock(DCB_TO_MDEV(dcb), MTK_USER_DPMAIF); + ret = mtk_pm_ds_wait_complete(DCB_TO_MDEV(dcb), MTK_USER_DPMAIF); + if (unlikely(ret < 0)) { + /* Exception scenario, but should always do command handler. 
*/ + mtk_dpmaif_common_err_handle(dcb, true); + } mtk_dpmaif_cmd_handle(srv); + + mtk_pm_ds_unlock(DCB_TO_MDEV(dcb), MTK_USER_DPMAIF); + pm_runtime_put_sync(DCB_TO_DEV(dcb)); } static int mtk_dpmaif_cmd_srvs_init(struct mtk_dpmaif_ctlb *dcb) @@ -3005,6 +3053,7 @@ static void mtk_dpmaif_sw_reset(struct mtk_dpmaif_ctlb *dcb) mtk_dpmaif_tx_vqs_reset(dcb); skb_queue_purge(&dcb->cmd_vq.list); memset(&dcb->traffic_stats, 0x00, sizeof(struct dpmaif_traffic_stats)); + dcb->dpmaif_pm_ready = true; dcb->dpmaif_user_ready = false; dcb->trans_enabled = false; } @@ -3098,6 +3147,64 @@ static int mtk_dpmaif_fsm_exit(struct mtk_dpmaif_ctlb *dcb) return ret; } +static int mtk_dpmaif_suspend(struct mtk_md_dev *mdev, void *param) +{ + struct mtk_dpmaif_ctlb *dcb = param; + + dcb->dpmaif_pm_ready = false; + dcb->dpmaif_suspending = true; + mtk_dpmaif_trans_ctl(dcb, false); + dcb->dpmaif_suspending = false; + + return 0; +} + +static int mtk_dpmaif_resume(struct mtk_md_dev *mdev, void *param) +{ + bool dev_is_reset = mtk_pm_check_dev_reset(mdev); + struct mtk_dpmaif_ctlb *dcb = param; + + /* If device resume after device power off, we don't need to enable trans. + * Since host driver will run re-init flow, we will get back to normal. + */ + if (!dev_is_reset) { + dcb->dpmaif_pm_ready = true; + mtk_dpmaif_trans_ctl(dcb, true); + } + + return 0; +} + +static int mtk_dpmaif_pm_init(struct mtk_dpmaif_ctlb *dcb) +{ + struct mtk_pm_entity *pm_entity; + int ret; + + pm_entity = &dcb->pm_entity; + INIT_LIST_HEAD(&pm_entity->entry); + pm_entity->user = MTK_USER_DPMAIF; + pm_entity->param = dcb; + pm_entity->suspend = &mtk_dpmaif_suspend; + pm_entity->resume = &mtk_dpmaif_resume; + + ret = mtk_pm_entity_register(DCB_TO_MDEV(dcb), pm_entity); + if (ret < 0) + dev_err(DCB_TO_DEV(dcb), "Failed to register dpmaif pm_entity\n"); + + return ret; +} + +static int mtk_dpmaif_pm_exit(struct mtk_dpmaif_ctlb *dcb) +{ + int ret; + + ret = mtk_pm_entity_unregister(DCB_TO_MDEV(dcb), &dcb->pm_entity); + if (ret < 0) + dev_err(DCB_TO_DEV(dcb), "Failed to unregister dpmaif pm_entity\n"); + + return ret; +} + static int mtk_dpmaif_sw_init(struct mtk_data_blk *data_blk, const struct dpmaif_res_cfg *res_cfg) { struct mtk_dpmaif_ctlb *dcb; @@ -3110,6 +3217,7 @@ static int mtk_dpmaif_sw_init(struct mtk_data_blk *data_blk, const struct dpmaif data_blk->dcb = dcb; dcb->data_blk = data_blk; dcb->dpmaif_state = DPMAIF_STATE_PWROFF; + dcb->dpmaif_pm_ready = true; dcb->dpmaif_user_ready = false; dcb->trans_enabled = false; mutex_init(&dcb->trans_ctl_lock); @@ -3160,6 +3268,12 @@ static int mtk_dpmaif_sw_init(struct mtk_data_blk *data_blk, const struct dpmaif goto err_init_port; } + ret = mtk_dpmaif_pm_init(dcb); + if (ret < 0) { + dev_err(DCB_TO_DEV(dcb), "Failed to initialize dpmaif PM, ret=%d\n", ret); + goto err_init_pm; + } + ret = mtk_dpmaif_fsm_init(dcb); if (ret < 0) { dev_err(DCB_TO_DEV(dcb), "Failed to initialize dpmaif fsm, ret=%d\n", ret); @@ -3177,6 +3291,8 @@ static int mtk_dpmaif_sw_init(struct mtk_data_blk *data_blk, const struct dpmaif err_init_irq: mtk_dpmaif_fsm_exit(dcb); err_init_fsm: + mtk_dpmaif_pm_exit(dcb); +err_init_pm: mtk_dpmaif_port_exit(dcb); err_init_port: mtk_dpmaif_drv_res_exit(dcb); @@ -3207,6 +3323,7 @@ static int mtk_dpmaif_sw_exit(struct mtk_data_blk *data_blk) mtk_dpmaif_irq_exit(dcb); mtk_dpmaif_fsm_exit(dcb); + mtk_dpmaif_pm_exit(dcb); mtk_dpmaif_port_exit(dcb); mtk_dpmaif_drv_res_exit(dcb); mtk_dpmaif_cmd_srvs_exit(dcb); @@ -3862,6 +3979,7 @@ static int mtk_dpmaif_rx_napi_poll(struct napi_struct *napi, int 
budget) int work_done = 0; int ret; + pm_runtime_get(DCB_TO_DEV(dcb)); if (likely(rxq->started)) { ret = mtk_dpmaif_rx_data_collect_more(rxq, budget, &work_done); stats->rx_done_last_cnt[rxq->id] += work_done; @@ -3877,6 +3995,8 @@ static int mtk_dpmaif_rx_napi_poll(struct napi_struct *napi, int budget) mtk_dpmaif_drv_intr_complete(dcb->drv_info, DPMAIF_INTR_DL_DONE, rxq->id, 0); } + pm_runtime_put(DCB_TO_DEV(dcb)); + return work_done; } diff --git a/drivers/net/wwan/mediatek/mtk_pm.c b/drivers/net/wwan/mediatek/mtk_pm.c new file mode 100644 index 000000000000..6505df09ce06 --- /dev/null +++ b/drivers/net/wwan/mediatek/mtk_pm.c @@ -0,0 +1,1004 @@ +// SPDX-License-Identifier: BSD-3-Clause-Clear +/* + * Copyright (c) 2022, MediaTek Inc. + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "mtk_dev.h" +#include "mtk_fsm.h" +#include "mtk_reg.h" + +#define LINK_CHECK_RETRY_COUNT 30 + +static int mtk_pm_wait_ds_lock_done(struct mtk_md_dev *mdev, u32 delay) +{ + struct mtk_md_pm *pm = &mdev->pm; + u32 polling_time = 0; + u32 reg = 0; + + do { + /* Delay some time to poll the deep sleep status. */ + udelay(pm->cfg.ds_lock_polling_interval_us); + + reg = mtk_hw_get_ds_status(mdev); + if ((reg & 0x1F) == 0x1F) + return 0; + + polling_time += pm->cfg.ds_lock_polling_interval_us; + } while (polling_time < delay); + dev_err(mdev->dev, "achieving max polling time %d res_state = 0x%x\n", delay, reg); + + return -ETIMEDOUT; +} + +static int mtk_pm_try_lock_l1ss(struct mtk_md_dev *mdev, bool report) +{ + int ret; + + mtk_hw_set_l1ss(mdev, L1SS_BIT_L1(L1SS_PM), false); + ret = mtk_pm_wait_ds_lock_done(mdev, mdev->pm.cfg.ds_lock_polling_max_us); + + if (ret) { + dev_err(mdev->dev, "Failed to lock L1ss!\n"); + if (report) + mtk_except_report_evt(mdev, EXCEPT_LINK_ERR); + } + + return ret; +} + +static int mtk_pm_reset(struct mtk_md_dev *mdev) +{ + struct mtk_md_pm *pm = &mdev->pm; + + if (!test_bit(SUSPEND_F_INIT, &pm->state)) { + set_bit(SUSPEND_F_INIT, &pm->state); + pm_runtime_get_noresume(mdev->dev); + } + + return 0; +} + +static int mtk_pm_init_late(struct mtk_md_dev *mdev) +{ + struct mtk_md_pm *pm = &mdev->pm; + + mtk_hw_unmask_ext_evt(mdev, pm->ext_evt_chs); + mtk_hw_unmask_irq(mdev, pm->irq_id); + mtk_hw_set_l1ss(mdev, L1SS_BIT_L1(L1SS_PM), true); + + /* Clear init flag */ + if (test_bit(SUSPEND_F_INIT, &pm->state)) { + clear_bit(SUSPEND_F_INIT, &pm->state); + pm_runtime_put_noidle(mdev->dev); + } + + return 0; +} + +static bool mtk_pm_except_handle(struct mtk_md_dev *mdev, bool report) +{ + if (mtk_hw_link_check(mdev)) { + /* report EXCEPT_LINK_ERR event if report is true; */ + if (report) + mtk_except_report_evt(mdev, EXCEPT_LINK_ERR); + return false; + } + + return true; +} + +/* mtk_pm_ds_lock - Lock device power state to prevent it entering deep sleep. + * @mdev: pointer to mtk_md_dev + * @user: user who issues lock request. + * + * This function locks device power state, any user who + * needs to interact with device shall make sure that + * device is not in deep sleep. + * + * Return: return value is 0 on success, a negative error + * code on failure. 
+ */ +int mtk_pm_ds_lock(struct mtk_md_dev *mdev, enum mtk_user_id user) +{ + struct mtk_md_pm *pm = &mdev->pm; + unsigned long flags = 0; + u32 reg; + + if (test_bit(SUSPEND_F_INIT, &pm->state) || + test_bit(SUSPEND_F_SLEEP, &pm->state)) { + reinit_completion(&pm->ds_lock_complete); + complete_all(&pm->ds_lock_complete); + atomic_inc(&pm->ds_lock_refcnt); + return 0; + } + + spin_lock_irqsave(&pm->ds_spinlock, flags); + if (atomic_inc_return(&pm->ds_lock_refcnt) == 1) { + reinit_completion(&pm->ds_lock_complete); + mtk_hw_ds_lock(mdev); + reg = mtk_hw_get_ds_status(mdev); + /* reg & 0xFF = 0b1111 1111 indicates linkdown, + * reg & 0xFF = 0b0001 1111 indicates ds lock is locked. + */ + if ((reg & 0xFF) == 0x1F) { + complete_all(&pm->ds_lock_complete); + spin_unlock_irqrestore(&pm->ds_spinlock, flags); + return 0; + } + mtk_hw_send_ext_evt(mdev, EXT_EVT_H2D_PCIE_DS_LOCK); + } + spin_unlock_irqrestore(&pm->ds_spinlock, flags); + + return 0; +} + +/* mtk_pm_ds_unlock - Unlock device power state. + * @mdev: pointer to mtk_md_dev + * @user: user who issues unlock request. + * + * This function unlocks device power state, after all users + * unlock device power state, the device will enter deep sleep. + * + * Return: return value is 0 on success, a negative error + * code on failure. + */ +int mtk_pm_ds_unlock(struct mtk_md_dev *mdev, enum mtk_user_id user) +{ + struct mtk_md_pm *pm = &mdev->pm; + u32 unlock_timeout; + + atomic_dec(&pm->ds_lock_refcnt); + if (test_bit(SUSPEND_F_INIT, &pm->state) || + test_bit(SUSPEND_F_SLEEP, &pm->state)) + return 0; + + unlock_timeout = pm->cfg.ds_delayed_unlock_timeout_ms; + if (!atomic_read(&pm->ds_lock_refcnt)) { + cancel_delayed_work(&pm->ds_unlock_work); + schedule_delayed_work(&pm->ds_unlock_work, msecs_to_jiffies(unlock_timeout)); + } + + return 0; +} + +/* mtk_pm_ds_wait_complete -Try to get completion for a while. + * + * @mdev: pointer to mtk_md_dev + * @user: user id + * + * The function is not interruptible. + * + * Return: return value is 0 on success, a negative error + * code on failure. + */ +int mtk_pm_ds_wait_complete(struct mtk_md_dev *mdev, enum mtk_user_id user) +{ + struct mtk_md_pm *pm = &mdev->pm; + u32 unlock_timeout; + int res; + /* 0 if timed out, and positive (at least 1, + * or number of jiffies left till timeout) if completed. + */ + unlock_timeout = pm->cfg.ds_lock_wait_timeout_ms; + res = wait_for_completion_timeout(&pm->ds_lock_complete, msecs_to_jiffies(unlock_timeout)); + + if (res > 0) + return 0; + + /* only dump register here */ + res = mtk_pm_except_handle(mdev, false); + return res ? -ETIMEDOUT : -EIO; +} + +static void mtk_pm_ds_unlock_work(struct work_struct *work) +{ + struct delayed_work *dwork = to_delayed_work(work); + struct mtk_md_dev *mdev; + struct mtk_md_pm *pm; + unsigned long flags; + + pm = container_of(dwork, struct mtk_md_pm, ds_unlock_work); + mdev = container_of(pm, struct mtk_md_dev, pm); + + flags = 0; + spin_lock_irqsave(&pm->ds_spinlock, flags); + if (!atomic_read(&pm->ds_lock_refcnt)) + mtk_hw_ds_unlock(mdev); + spin_unlock_irqrestore(&pm->ds_spinlock, flags); +} + +/* mtk_pm_entity_register - Register pm entity into mtk_md_pm's list entry. + * @mdev: pointer to mtk_md_dev + * @user: pm entity + * + * After registration, pm entity's related callbacks + * could be called upon pm event happening. + * + * Return: return value is 0 on success, a negative error + * code on failure. 
+ */ +int mtk_pm_entity_register(struct mtk_md_dev *mdev, + struct mtk_pm_entity *md_entity) +{ + struct mtk_md_pm *pm = &mdev->pm; + struct mtk_pm_entity *entity; + + mutex_lock(&pm->entity_mtx); + list_for_each_entry(entity, &pm->entities, entry) { + if (entity->user == md_entity->user) { + mutex_unlock(&pm->entity_mtx); + return -EALREADY; + } + } + list_add_tail(&md_entity->entry, &pm->entities); + mutex_unlock(&pm->entity_mtx); + + return 0; +} + +/* mtk_pm_entity_unregister - Unregister pm entity from mtk_md_pm's list entry. + * @mdev: pointer to mtk_md_dev + * @user: pm entity + * + * Return: return value is 0 on success, a negative error + * code on failure. + */ +int mtk_pm_entity_unregister(struct mtk_md_dev *mdev, + struct mtk_pm_entity *md_entity) +{ + struct mtk_pm_entity *entity, *cursor; + struct mtk_md_pm *pm = &mdev->pm; + + mutex_lock(&pm->entity_mtx); + list_for_each_entry_safe(cursor, entity, &pm->entities, entry) { + if (cursor->user == md_entity->user) { + list_del(&cursor->entry); + mutex_unlock(&pm->entity_mtx); + return 0; + } + } + mutex_unlock(&pm->entity_mtx); + + return -EALREADY; +} + +/* mtk_pm_check_dev_reset - Check if device power off after suspended. + * @mdev: pointer to mtk_md_dev + * + * Return: true indicates device is powered off after suspended, + * false indicates device is not powered off after suspended. + */ +bool mtk_pm_check_dev_reset(struct mtk_md_dev *mdev) +{ + return mdev->pm.resume_from_l3; +} + +static int mtk_pm_reinit(struct mtk_md_dev *mdev) +{ + struct mtk_md_pm *pm = &mdev->pm; + + if (!test_bit(SUSPEND_F_INIT, &pm->state)) { + set_bit(SUSPEND_F_INIT, &pm->state); + pm_runtime_get_noresume(mdev->dev); + } + + clear_bit(SUSPEND_F_SLEEP, &pm->state); + + /* in init stage, no need to report exception event */ + return mtk_pm_try_lock_l1ss(mdev, false); +} + +static int mtk_pm_entity_resume_early(struct mtk_md_dev *mdev) +{ + struct mtk_md_pm *pm = &mdev->pm; + struct mtk_pm_entity *entity; + int ret; + + list_for_each_entry(entity, &pm->entities, entry) { + if (entity->resume_early) { + ret = entity->resume_early(mdev, entity->param); + if (ret) + return ret; + } + } + + return 0; +} + +static int mtk_pm_entity_resume(struct mtk_md_dev *mdev) +{ + struct mtk_md_pm *pm = &mdev->pm; + struct mtk_pm_entity *entity; + int ret; + + list_for_each_entry(entity, &pm->entities, entry) { + if (entity->resume) { + ret = entity->resume(mdev, entity->param); + if (ret) + return ret; + } + } + + return 0; +} + +static int mtk_pm_entity_suspend(struct mtk_md_dev *mdev) +{ + struct mtk_md_pm *pm = &mdev->pm; + struct mtk_pm_entity *entity; + int ret; + + list_for_each_entry(entity, &pm->entities, entry) { + if (entity->suspend) { + ret = entity->suspend(mdev, entity->param); + if (ret) + return ret; + } + } + + return 0; +} + +static int mtk_pm_entity_suspend_late(struct mtk_md_dev *mdev) +{ + struct mtk_md_pm *pm = &mdev->pm; + struct mtk_pm_entity *entity; + int ret; + + list_for_each_entry(entity, &pm->entities, entry) { + if (entity->suspend_late) { + ret = entity->suspend_late(mdev, entity->param); + if (ret) + return ret; + } + } + + return 0; +} + +static void mtk_pm_ctrl_entity_resume(struct mtk_md_dev *mdev) +{ + struct mtk_md_pm *pm = &mdev->pm; + struct mtk_pm_entity *entity; + + list_for_each_entry(entity, &pm->entities, entry) { + if (entity->user == MTK_USER_CTRL && entity->resume) { + entity->resume(mdev, entity->param); + break; + } + } +} + +static void mtk_pm_dev_ack_fail_handle(struct mtk_md_dev *mdev) +{ + 
mtk_pm_except_handle(mdev, true); + mtk_pm_ctrl_entity_resume(mdev); +} + +static int mtk_pm_enable_wake(struct mtk_md_dev *mdev, u8 dev_state, u8 system_state, bool enable) +{ +#ifdef CONFIG_ACPI + union acpi_object in_arg[3]; + struct acpi_object_list arg_list = { 3, in_arg }; + struct pci_dev *bridge; + acpi_status acpi_ret; + acpi_handle handle; + + if (acpi_disabled) { + dev_err(mdev->dev, "Unsupported, acpi function isn't enable\n"); + return -ENODEV; + } + + bridge = pci_upstream_bridge(to_pci_dev(mdev->dev)); + if (!bridge) { + dev_err(mdev->dev, "Unable to find bridge\n"); + return -ENODEV; + } + + handle = ACPI_HANDLE(&bridge->dev); + if (!handle) { + dev_err(mdev->dev, "Unsupported, acpi handle isn't found\n"); + return -ENODEV; + } + if (!acpi_has_method(handle, "_DSW")) { + dev_err(mdev->dev, "Unsupported,_DSW method isn't supported\n"); + return -ENODEV; + } + + in_arg[0].type = ACPI_TYPE_INTEGER; + in_arg[0].integer.value = enable; + in_arg[1].type = ACPI_TYPE_INTEGER; + in_arg[1].integer.value = system_state; + in_arg[2].type = ACPI_TYPE_INTEGER; + in_arg[2].integer.value = dev_state; + acpi_ret = acpi_evaluate_object(handle, "_DSW", &arg_list, NULL); + if (ACPI_FAILURE(acpi_ret)) + dev_err(mdev->dev, "_DSW method fail for parent: %s\n", + acpi_format_exception(acpi_ret)); + + return 0; +#else + dev_err(mdev->dev, "Unsupported, CONFIG ACPI hasn't been set to 'y'\n"); + + return -ENODEV; +#endif +} + +static int mtk_pm_suspend_device(struct mtk_md_dev *mdev, bool is_runtime) +{ + struct mtk_md_pm *pm = &mdev->pm; + unsigned long flags; + u32 suspend_timeout; + int ret; + + if (test_bit(SUSPEND_F_INIT, &pm->state)) + return -EBUSY; + + ret = mtk_pm_try_lock_l1ss(mdev, true); + if (ret) + return -EBUSY; + + set_bit(SUSPEND_F_SLEEP, &pm->state); + + mtk_fsm_pause(mdev); + mtk_except_stop(mdev); + + ret = mtk_pm_entity_suspend(mdev); + if (ret) + goto err_suspend; + + reinit_completion(&pm->pm_ack); + reinit_completion(&pm->pm_ack_sap); + mtk_hw_send_ext_evt(mdev, EXT_EVT_H2D_PCIE_PM_SUSPEND_REQ); + mtk_hw_send_ext_evt(mdev, EXT_EVT_H2D_PCIE_PM_SUSPEND_REQ_AP); + + suspend_timeout = pm->cfg.suspend_wait_timeout_ms; + ret = wait_for_completion_timeout(&pm->pm_ack, msecs_to_jiffies(suspend_timeout)); + if (!ret) { + dev_err(mdev->dev, "Suspend MD timeout!\n"); + mtk_pm_dev_ack_fail_handle(mdev); + ret = -ETIMEDOUT; + goto err_suspend; + } + suspend_timeout = pm->cfg.suspend_wait_timeout_sap_ms; + ret = wait_for_completion_timeout(&pm->pm_ack_sap, msecs_to_jiffies(suspend_timeout)); + if (!ret) { + dev_err(mdev->dev, "Suspend sAP timeout!\n"); + mtk_pm_dev_ack_fail_handle(mdev); + ret = -ETIMEDOUT; + goto err_suspend; + } + + ret = mtk_pm_entity_suspend_late(mdev); + if (ret) + goto err_suspend; + + cancel_delayed_work_sync(&pm->ds_unlock_work); + if (!atomic_read(&pm->ds_lock_refcnt)) { + spin_lock_irqsave(&pm->ds_spinlock, flags); + mtk_hw_ds_unlock(mdev); + spin_unlock_irqrestore(&pm->ds_spinlock, flags); + } + + if (is_runtime) + mtk_pm_enable_wake(mdev, 3, 0, true); + + mtk_hw_set_l1ss(mdev, L1SS_BIT_L1(L1SS_PM), true); + + dev_info(mdev->dev, "Suspend success.\n"); + + return ret; + +err_suspend: + mtk_fsm_start(mdev); + mtk_except_start(mdev); + clear_bit(SUSPEND_F_SLEEP, &pm->state); + return ret; +} + +static int mtk_pm_do_resume_device(struct mtk_md_dev *mdev) +{ + struct mtk_md_pm *pm = &mdev->pm; + u32 resume_timeout; + int ret; + + mtk_pm_try_lock_l1ss(mdev, true); + + ret = mtk_pm_entity_resume_early(mdev); + if (ret) + goto err_resume; + + 
reinit_completion(&pm->pm_ack); + reinit_completion(&pm->pm_ack_sap); + + mtk_hw_send_ext_evt(mdev, EXT_EVT_H2D_PCIE_PM_RESUME_REQ); + mtk_hw_send_ext_evt(mdev, EXT_EVT_H2D_PCIE_PM_RESUME_REQ_AP); + + resume_timeout = pm->cfg.resume_wait_timeout_ms; + ret = wait_for_completion_timeout(&pm->pm_ack, msecs_to_jiffies(resume_timeout)); + if (!ret) { + dev_err(mdev->dev, "Resume MD fail!\n"); + mtk_pm_dev_ack_fail_handle(mdev); + ret = -ETIMEDOUT; + goto err_resume; + } + resume_timeout = pm->cfg.resume_wait_timeout_sap_ms; + ret = wait_for_completion_timeout(&pm->pm_ack_sap, msecs_to_jiffies(resume_timeout)); + if (!ret) { + dev_err(mdev->dev, "Resume sAP fail!\n"); + mtk_pm_dev_ack_fail_handle(mdev); + ret = -ETIMEDOUT; + goto err_resume; + } + + ret = mtk_pm_entity_resume(mdev); + if (ret) + goto err_resume; + + mtk_hw_set_l1ss(mdev, L1SS_BIT_L1(L1SS_PM), true); + dev_info(mdev->dev, "Resume success.\n"); + +err_resume: + mtk_fsm_start(mdev); + mtk_except_start(mdev); + clear_bit(SUSPEND_F_SLEEP, &pm->state); + + return ret; +} + +static int mtk_pm_resume_device(struct mtk_md_dev *mdev, bool is_runtime, bool atr_init) +{ + enum mtk_pm_resume_state resume_state; + struct mtk_md_pm *pm = &mdev->pm; + int ret = 0; + + if (is_runtime) + mtk_pm_enable_wake(mdev, 0, 0, false); + + if (unlikely(test_bit(SUSPEND_F_INIT, &pm->state))) { + clear_bit(SUSPEND_F_SLEEP, &pm->state); + return 0; + } + + resume_state = mtk_hw_get_resume_state(mdev); + + if ((resume_state == PM_RESUME_STATE_INIT && atr_init) || + resume_state == PM_RESUME_STATE_L3) + mdev->pm.resume_from_l3 = true; + else + mdev->pm.resume_from_l3 = false; + dev_info(mdev->dev, "Resume Enter: resume state = %d, is_runtime = %d, atr_init = %d\n", + resume_state, is_runtime, atr_init); + switch (resume_state) { + case PM_RESUME_STATE_INIT: + if (!atr_init) + break; + fallthrough; + case PM_RESUME_STATE_L3: + ret = mtk_hw_reinit(mdev, REINIT_TYPE_RESUME); + if (ret) { + mtk_pm_except_handle(mdev, false); + dev_err(mdev->dev, "Failed to reinit HW in resume routine!\n"); + return ret; + } + + mtk_pm_entity_resume_early(mdev); + mtk_pm_entity_resume(mdev); + + mtk_fsm_evt_submit(mdev, FSM_EVT_COLD_RESUME, + FSM_F_DFLT, NULL, 0, EVT_MODE_TOHEAD); + /* No need to start except, for hw reinit will do it later. 
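+ * The REINIT event below is submitted in blocking mode, so the cold + * resume flow has completed before this function returns.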
*/ + mtk_fsm_start(mdev); + mtk_fsm_evt_submit(mdev, FSM_EVT_REINIT, + FSM_F_DFLT, NULL, 0, EVT_MODE_BLOCKING); + dev_info(mdev->dev, "Resume success from L3.\n"); + return 0; + case PM_RESUME_STATE_L2_EXCEPT: + ret = mtk_hw_reinit(mdev, REINIT_TYPE_RESUME); + if (ret) { + mtk_pm_except_handle(mdev, false); + dev_err(mdev->dev, "Failed to reinit HW in PM!\n"); + return ret; + } + mtk_hw_unmask_irq(mdev, pm->irq_id); + fallthrough; + case PM_RESUME_STATE_L1_EXCEPT: + mtk_pm_entity_resume_early(mdev); + mtk_pm_entity_resume(mdev); + set_bit(SUSPEND_F_INIT, &pm->state); + mtk_fsm_start(mdev); + mtk_except_start(mdev); + dev_info(mdev->dev, "Resume success from exception.\n"); + return 0; + case PM_RESUME_STATE_L2: + ret = mtk_hw_reinit(mdev, REINIT_TYPE_RESUME); + if (ret) { + dev_err(mdev->dev, "Failed to reinit HW in PM!\n"); + return ret; + } + mtk_hw_unmask_irq(mdev, pm->irq_id); + fallthrough; + case PM_RESUME_STATE_L1: + break; + default: + set_bit(SUSPEND_F_INIT, &pm->state); + cancel_delayed_work_sync(&pm->resume_work); + schedule_delayed_work(&pm->resume_work, HZ); + return 0; + } + + return mtk_pm_do_resume_device(mdev); +} + +static void mtk_pm_resume_work(struct work_struct *work) +{ + struct delayed_work *dwork = to_delayed_work(work); + struct mtk_md_dev *mdev; + struct mtk_md_pm *pm; + int ret = 0; + int cnt = 0; + + pm = container_of(dwork, struct mtk_md_pm, resume_work); + mdev = container_of(pm, struct mtk_md_dev, pm); + + do { + ret = mtk_hw_link_check(mdev); + if (!ret) + break; + /* Wait for 1 second before re-checking the link state. */ + msleep(1000); + cnt++; + } while (cnt < LINK_CHECK_RETRY_COUNT); + + if (ret) { + mtk_except_report_evt(mdev, EXCEPT_LINK_ERR); + return; + } + mtk_fsm_evt_submit(mdev, FSM_EVT_COLD_RESUME, FSM_F_DFLT, NULL, 0, EVT_MODE_TOHEAD); + /* No need to start the exception service here; HW reinit will do it. */ + mtk_fsm_start(mdev); + /* FSM_EVT_REINIT triggers a full reinit. */ + mtk_fsm_evt_submit(mdev, FSM_EVT_REINIT, FSM_F_FULL_REINIT, NULL, 0, 0); + dev_info(mdev->dev, "Resume success within delayed work.\n"); +} + +int mtk_pm_suspend(struct device *dev) +{ + struct mtk_md_dev *mdev; + struct pci_dev *pdev; + + pdev = to_pci_dev(dev); + mdev = pci_get_drvdata(pdev); + + dev_info(mdev->dev, "Enter suspend.\n"); + return mtk_pm_suspend_device(mdev, false); +} + +int mtk_pm_resume(struct device *dev, bool atr_init) +{ + struct mtk_md_dev *mdev; + struct pci_dev *pdev; + + pdev = to_pci_dev(dev); + mdev = pci_get_drvdata(pdev); + + dev_info(mdev->dev, "Enter resume.\n"); + return mtk_pm_resume_device(mdev, false, atr_init); +} + +int mtk_pm_freeze(struct device *dev) +{ + struct mtk_md_dev *mdev; + struct pci_dev *pdev; + + pdev = to_pci_dev(dev); + mdev = pci_get_drvdata(pdev); + + dev_info(mdev->dev, "Enter freeze.\n"); + return mtk_pm_suspend_device(mdev, false); +} + +int mtk_pm_poweroff(struct device *dev) +{ + struct mtk_md_dev *mdev; + struct pci_dev *pdev; + + pdev = to_pci_dev(dev); + mdev = pci_get_drvdata(pdev); + + return mtk_pm_suspend_device(mdev, false); +} + +int mtk_pm_restore(struct device *dev, bool atr_init) +{ + struct mtk_md_dev *mdev; + struct pci_dev *pdev; + + pdev = to_pci_dev(dev); + mdev = pci_get_drvdata(pdev); + + return mtk_pm_resume_device(mdev, false, atr_init); +} + +int mtk_pm_thaw(struct device *dev, bool atr_init) +{ + struct mtk_md_dev *mdev; + struct pci_dev *pdev; + + pdev = to_pci_dev(dev); + mdev = pci_get_drvdata(pdev); + + return mtk_pm_resume_device(mdev, false, atr_init); +} + +int mtk_pm_runtime_suspend(struct device *dev) +{ + struct
mtk_md_dev *mdev; + struct pci_dev *pdev; + + pdev = to_pci_dev(dev); + mdev = pci_get_drvdata(pdev); + + return mtk_pm_suspend_device(mdev, true); +} + +int mtk_pm_runtime_resume(struct device *dev, bool atr_init) +{ + struct mtk_md_dev *mdev; + struct pci_dev *pdev; + + pdev = to_pci_dev(dev); + mdev = pci_get_drvdata(pdev); + + return mtk_pm_resume_device(mdev, true, atr_init); +} + +int mtk_pm_runtime_idle(struct device *dev) +{ + pm_schedule_suspend(dev, 20 * MSEC_PER_SEC); + return -EBUSY; +} + +void mtk_pm_shutdown(struct mtk_md_dev *mdev) +{ + mtk_pm_suspend_device(mdev, false); +} + +static void mtk_pm_fsm_state_handler(struct mtk_fsm_param *fsm_param, void *data) +{ + struct mtk_md_dev *mdev; + struct mtk_md_pm *pm; + + pm = data; + mdev = container_of(pm, struct mtk_md_dev, pm); + switch (fsm_param->to) { + case FSM_STATE_ON: + if (fsm_param->evt_id == FSM_EVT_REINIT) + mtk_pm_reinit(mdev); + break; + + case FSM_STATE_READY: + mtk_pm_init_late(mdev); + break; + + case FSM_STATE_OFF: + mtk_pm_reset(mdev); + break; + + case FSM_STATE_MDEE: + if (fsm_param->fsm_flag == FSM_F_MDEE_INIT) + mtk_pm_reinit(mdev); + break; + + default: + break; + } +} + +static int mtk_pm_irq_handler(int irq_id, void *data) +{ + struct mtk_md_dev *mdev; + struct mtk_md_pm *pm; + + pm = data; + mdev = container_of(pm, struct mtk_md_dev, pm); + mtk_hw_clear_sw_evt(mdev, D2H_SW_EVT_PM_LOCK_ACK); + mtk_hw_clear_irq(mdev, irq_id); + complete_all(&pm->ds_lock_complete); + mtk_hw_unmask_irq(mdev, irq_id); + return IRQ_HANDLED; +} + +static int mtk_pm_ext_evt_handler(u32 status, void *data) +{ + int pm_suspend_ack_sap = 0; + int pm_resume_ack_sap = 0; + struct mtk_md_dev *mdev; + int pm_suspend_ack = 0; + int pm_resume_ack = 0; + struct mtk_md_pm *pm; + int pm_ds_lock = 0; + + pm = data; + mdev = container_of(pm, struct mtk_md_dev, pm); + + if (status & EXT_EVT_D2H_PCIE_DS_LOCK_ACK) + pm_ds_lock = 1; + + if (status & EXT_EVT_D2H_PCIE_PM_SUSPEND_ACK) + pm_suspend_ack = 1; + + if (status & EXT_EVT_D2H_PCIE_PM_RESUME_ACK) + pm_resume_ack = 1; + + if (status & EXT_EVT_D2H_PCIE_PM_SUSPEND_ACK_AP) + pm_suspend_ack_sap = 1; + + if (status & EXT_EVT_D2H_PCIE_PM_RESUME_ACK_AP) + pm_resume_ack_sap = 1; + + mtk_hw_clear_ext_evt(mdev, status); + + if (pm_ds_lock) + complete_all(&pm->ds_lock_complete); + + if (pm_suspend_ack || pm_resume_ack) + complete_all(&pm->pm_ack); + + if (pm_suspend_ack_sap || pm_resume_ack_sap) + complete_all(&pm->pm_ack_sap); + + mtk_hw_unmask_ext_evt(mdev, status); + + return IRQ_HANDLED; +} + +/* mtk_pm_init - Initialize pm fields of struct mtk_md_dev. + * @mdev: pointer to mtk_md_dev + * + * This function initializes pm fields of struct mtk_md_dev, + * after that the driver is capable of performing pm related + * functions. + * + * Return: return value is 0 on success, a negative error + * code on failure. 
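+ * + * Other sub-modules are then expected to hook into suspend/resume by + * registering a PM entity, roughly as sketched below (my_suspend and + * my_resume are illustrative placeholders, invoked as cb(mdev, entity->param)): + * + *   static struct mtk_pm_entity my_entity = { + *           .user = MTK_USER_CTRL, + *           .suspend = my_suspend, + *           .resume = my_resume, + *   }; + * + *   mtk_pm_entity_register(mdev, &my_entity);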
+ */ +int mtk_pm_init(struct mtk_md_dev *mdev) +{ + struct mtk_md_pm *pm = &mdev->pm; + int irq_id = -1; + int ret; + + INIT_LIST_HEAD(&pm->entities); + + spin_lock_init(&pm->ds_spinlock); + mutex_init(&pm->entity_mtx); + + init_completion(&pm->ds_lock_complete); + init_completion(&pm->pm_ack); + init_completion(&pm->pm_ack_sap); + + INIT_DELAYED_WORK(&pm->ds_unlock_work, mtk_pm_ds_unlock_work); + INIT_DELAYED_WORK(&pm->resume_work, mtk_pm_resume_work); + + atomic_set(&pm->ds_lock_refcnt, 0); + pm->ds_lock_sent = 0; + pm->ds_lock_recv = 0; + + pm->cfg.ds_delayed_unlock_timeout_ms = 100; + pm->cfg.ds_lock_wait_timeout_ms = 50; + pm->cfg.suspend_wait_timeout_ms = 1500; + pm->cfg.resume_wait_timeout_ms = 1500; + pm->cfg.suspend_wait_timeout_sap_ms = 1500; + pm->cfg.resume_wait_timeout_sap_ms = 1500; + pm->cfg.ds_lock_polling_max_us = 10000; + pm->cfg.ds_lock_polling_min_us = 2000; + pm->cfg.ds_lock_polling_interval_us = 10; + + /* Set init event flag to prevent device from suspending. */ + set_bit(SUSPEND_F_INIT, &pm->state); + + mtk_pm_try_lock_l1ss(mdev, false); + + device_init_wakeup(mdev->dev, true); + + /* register sw irq for ds lock. */ + irq_id = mtk_hw_get_irq_id(mdev, MTK_IRQ_SRC_PM_LOCK); + if (irq_id < 0) { + dev_err(mdev->dev, "Failed to allocate Irq id!\n"); + ret = -EFAULT; + goto err_start_init; + } + + ret = mtk_hw_register_irq(mdev, irq_id, mtk_pm_irq_handler, pm); + if (ret) { + dev_err(mdev->dev, "Failed to register irq!\n"); + ret = -EFAULT; + goto err_start_init; + } + pm->irq_id = irq_id; + + /* register mhccif interrupt handler. */ + pm->ext_evt_chs = EXT_EVT_D2H_PCIE_PM_SUSPEND_ACK | + EXT_EVT_D2H_PCIE_PM_RESUME_ACK | + EXT_EVT_D2H_PCIE_PM_SUSPEND_ACK_AP | + EXT_EVT_D2H_PCIE_PM_RESUME_ACK_AP | + EXT_EVT_D2H_PCIE_DS_LOCK_ACK; + + ret = mtk_hw_register_ext_evt(mdev, pm->ext_evt_chs, mtk_pm_ext_evt_handler, pm); + if (ret) { + dev_err(mdev->dev, "Failed to register ext event!\n"); + ret = -EFAULT; + goto err_reg_ext_evt; + } + + /* register fsm notify callback */ + ret = mtk_fsm_notifier_register(mdev, MTK_USER_PM, + mtk_pm_fsm_state_handler, pm, FSM_PRIO_0, false); + if (ret) { + dev_err(mdev->dev, "Failed to register fsm notifier!\n"); + ret = -EFAULT; + goto err_reg_fsm_notifier; + } + + return 0; + +err_reg_fsm_notifier: + mtk_hw_unregister_ext_evt(mdev, pm->ext_evt_chs); +err_reg_ext_evt: + if (irq_id >= 0) + mtk_hw_unregister_irq(mdev, irq_id); +err_start_init: + return ret; +} + +/* mtk_pm_exit_early - Acquire device ds lock at the beginning + * of driver exit routine. + * @mdev: pointer to mtk_md_dev + * + * Return: return value is 0 on success, a negative error + * code on failure. + */ +int mtk_pm_exit_early(struct mtk_md_dev *mdev) +{ + /* In kernel device_del, system pm is already removed from pm entry list + * and runtime pm is forbidden as well, thus no need to disable + * PM here. + */ + + return mtk_pm_try_lock_l1ss(mdev, false); +} + +/* mtk_pm_exit - PM exit cleanup routine. + * @mdev: pointer to mtk_md_dev + * + * Return: return value is 0 on success, a negative error + * code on failure. 
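+ * + * This cancels the pending ds_unlock_work and resume_work and unregisters + * the FSM notifier, the MHCCIF PM events and the PM interrupt.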
+ */ +int mtk_pm_exit(struct mtk_md_dev *mdev) +{ + struct mtk_md_pm *pm; + + if (!mdev) + return -EINVAL; + + pm = &mdev->pm; + + cancel_delayed_work_sync(&pm->ds_unlock_work); + cancel_delayed_work_sync(&pm->resume_work); + + mtk_fsm_notifier_unregister(mdev, MTK_USER_PM); + mtk_hw_unregister_ext_evt(mdev, pm->ext_evt_chs); + mtk_hw_unregister_irq(mdev, pm->irq_id); + + return 0; +} diff --git a/drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.c b/drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.c index c58ec64a59bf..51c903f6f664 100644 --- a/drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.c +++ b/drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.c @@ -339,6 +339,7 @@ static void mtk_cldma_tx_done_work(struct work_struct *work) struct trb *trb; int i; + pm_runtime_get(mdev->dev); again: for (i = 0; i < txq->req_pool_size; i++) { req = txq->req_pool + txq->free_idx; @@ -365,6 +366,7 @@ static void mtk_cldma_tx_done_work(struct work_struct *work) DIR_TX, txq->txqno, QUEUE_XFER_DONE); if (state) { if (unlikely(state == LINK_ERROR_VAL)) { + pm_runtime_put(mdev->dev); mtk_except_report_evt(mdev, EXCEPT_LINK_ERR); return; } @@ -385,6 +387,7 @@ static void mtk_cldma_tx_done_work(struct work_struct *work) mtk_cldma_unmask_intr(mdev, txq->hw->base_addr, DIR_TX, txq->txqno, QUEUE_XFER_DONE); mtk_cldma_clear_ip_busy(mdev, txq->hw->base_addr); + pm_runtime_put(mdev->dev); } static void mtk_cldma_rx_done_work(struct work_struct *work) @@ -406,6 +409,7 @@ static void mtk_cldma_rx_done_work(struct work_struct *work) else bm_pool = rxq->hw->cd->trans->ctrl_blk->bm_pool; + pm_runtime_get(mdev->dev); do { for (i = 0; i < rxq->req_pool_size; i++) { req = rxq->req_pool + rxq->free_idx; @@ -455,6 +459,7 @@ static void mtk_cldma_rx_done_work(struct work_struct *work) break; if (unlikely(state == LINK_ERROR_VAL)) { + pm_runtime_put(mdev->dev); mtk_except_report_evt(mdev, EXCEPT_LINK_ERR); return; } @@ -469,6 +474,7 @@ static void mtk_cldma_rx_done_work(struct work_struct *work) mtk_cldma_unmask_intr(mdev, rxq->hw->base_addr, DIR_RX, rxq->rxqno, QUEUE_XFER_DONE); mtk_cldma_mask_ip_busy_to_pci(mdev, rxq->hw->base_addr, rxq->rxqno, IP_BUSY_RXDONE); mtk_cldma_clear_ip_busy(mdev, rxq->hw->base_addr); + pm_runtime_put(mdev->dev); } static int mtk_cldma_isr(int irq_id, void *param) @@ -960,6 +966,43 @@ int mtk_cldma_start_xfer_t800(struct cldma_hw *hw, int qno) return 0; } +void mtk_cldma_suspend_t800(struct cldma_hw *hw) +{ + struct mtk_md_dev *mdev = hw->mdev; + struct txq *txq; + struct rxq *rxq; + int i; + + mtk_cldma_stop_queue(mdev, hw->base_addr, DIR_TX, ALLQ); + mtk_cldma_stop_queue(mdev, hw->base_addr, DIR_RX, ALLQ); + + for (i = 0; i < HW_QUEUE_NUM; i++) { + txq = hw->txq[i]; + if (txq) + flush_work(&txq->tx_done_work); + + rxq = hw->rxq[i]; + if (rxq) + flush_work(&rxq->rx_done_work); + } + + mtk_hw_mask_irq(mdev, hw->pci_ext_irq_id); +} + +void mtk_cldma_resume_t800(struct cldma_hw *hw) +{ + struct mtk_md_dev *mdev = hw->mdev; + int i; + + mtk_cldma_hw_init(hw->mdev, hw->base_addr); + for (i = 0; i < HW_QUEUE_NUM; i++) { + if (hw->rxq[i]) + mtk_cldma_resume_queue(hw->mdev, hw->base_addr, DIR_RX, hw->rxq[i]->rxqno); + } + + mtk_hw_unmask_irq(mdev, hw->pci_ext_irq_id); +} + static void mtk_cldma_hw_reset(struct mtk_md_dev *mdev, int hif_id) { u32 val = mtk_hw_read32(mdev, REG_DEV_INFRA_BASE + REG_INFRA_RST0_SET); diff --git a/drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.h b/drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.h index 470a40015f77..f4fc6b55a96e 100644 --- 
a/drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.h +++ b/drivers/net/wwan/mediatek/pcie/mtk_cldma_drv_t800.h @@ -18,5 +18,7 @@ int mtk_cldma_txq_free_t800(struct cldma_hw *hw, int vqno); struct rxq *mtk_cldma_rxq_alloc_t800(struct cldma_hw *hw, struct sk_buff *skb); int mtk_cldma_rxq_free_t800(struct cldma_hw *hw, int vqno); int mtk_cldma_start_xfer_t800(struct cldma_hw *hw, int qno); +void mtk_cldma_suspend_t800(struct cldma_hw *hw); +void mtk_cldma_resume_t800(struct cldma_hw *hw); void mtk_cldma_fsm_state_listener_t800(struct mtk_fsm_param *param, struct cldma_hw *hw); #endif diff --git a/drivers/net/wwan/mediatek/pcie/mtk_pci.c b/drivers/net/wwan/mediatek/pcie/mtk_pci.c index 47727567b0c5..2db1bb52aa55 100644 --- a/drivers/net/wwan/mediatek/pcie/mtk_pci.c +++ b/drivers/net/wwan/mediatek/pcie/mtk_pci.c @@ -368,6 +368,11 @@ static int mtk_pci_clear_irq(struct mtk_md_dev *mdev, int irq_id) return 0; } +static void mtk_pci_clear_sw_evt(struct mtk_md_dev *mdev, enum mtk_d2h_sw_evt evt) +{ + mtk_pci_mac_write32(mdev->hw_priv, REG_SW_TRIG_INTR_CLR, BIT(evt)); +} + static int mtk_mhccif_register_evt(struct mtk_md_dev *mdev, u32 chs, int (*evt_cb)(u32 status, void *data), void *data) { @@ -614,6 +619,14 @@ static int mtk_pci_get_hp_status(struct mtk_md_dev *mdev) return priv->rc_hp_on; } +static void mtk_pci_write_pm_cnt(struct mtk_md_dev *mdev, u32 val) +{ + struct mtk_pci_priv *priv = mdev->hw_priv; + + mtk_pci_write32(mdev, priv->cfg->mhccif_rc_base_addr + + MHCCIF_RC2EP_PCIE_PM_COUNTER, val); +} + static u32 mtk_pci_get_resume_state(struct mtk_md_dev *mdev) { return mtk_pci_mac_read32(mdev->hw_priv, REG_PCIE_DEBUG_DUMMY_3); @@ -636,6 +649,7 @@ static const struct mtk_hw_ops mtk_pci_ops = { .mask_irq = mtk_pci_mask_irq, .unmask_irq = mtk_pci_unmask_irq, .clear_irq = mtk_pci_clear_irq, + .clear_sw_evt = mtk_pci_clear_sw_evt, .register_ext_evt = mtk_mhccif_register_evt, .unregister_ext_evt = mtk_mhccif_unregister_evt, .mask_ext_evt = mtk_mhccif_mask_evt, @@ -648,6 +662,7 @@ static const struct mtk_hw_ops mtk_pci_ops = { .link_check = mtk_pci_link_check, .mmio_check = mtk_pci_mmio_check, .get_hp_status = mtk_pci_get_hp_status, + .write_pm_cnt = mtk_pci_write_pm_cnt, }; static void mtk_mhccif_isr_work(struct work_struct *work) @@ -1194,12 +1209,117 @@ static const struct pci_error_handlers mtk_pci_err_handler = { .resume = mtk_pci_io_resume, }; +static bool mtk_pci_check_init_status(struct mtk_md_dev *mdev) +{ + if (mtk_pci_mac_read32(mdev->hw_priv, REG_ATR_PCIE_WIN0_T0_SRC_ADDR_LSB) == + ATR_WIN0_SRC_ADDR_LSB_DEFT) + /* Device reboots and isn't configured ATR, so it is default value. 
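+ * If this register still holds its reset default, the device lost power + * while suspended (L3) and the resume path must fully re-initialize it.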
*/ + return TRUE; + return FALSE; +} + +static int __maybe_unused mtk_pci_pm_suspend(struct device *dev) +{ + return mtk_pm_suspend(dev); +} + +static int __maybe_unused mtk_pci_pm_resume(struct device *dev) +{ + struct pci_dev *pdev = to_pci_dev(dev); + struct mtk_md_dev *mdev; + bool atr_init; + + mdev = pci_get_drvdata(pdev); + atr_init = mtk_pci_check_init_status(mdev); + + return mtk_pm_resume(dev, atr_init); +} + +static int __maybe_unused mtk_pci_pm_freeze(struct device *dev) +{ + return mtk_pm_freeze(dev); +} + +static int __maybe_unused mtk_pci_pm_thaw(struct device *dev) +{ + struct pci_dev *pdev = to_pci_dev(dev); + struct mtk_md_dev *mdev; + bool atr_init; + + mdev = pci_get_drvdata(pdev); + atr_init = mtk_pci_check_init_status(mdev); + + return mtk_pm_thaw(dev, atr_init); +} + +static int __maybe_unused mtk_pci_pm_poweroff(struct device *dev) +{ + return mtk_pm_poweroff(dev); +} + +static int __maybe_unused mtk_pci_pm_restore(struct device *dev) +{ + struct pci_dev *pdev = to_pci_dev(dev); + struct mtk_md_dev *mdev; + bool atr_init; + + mdev = pci_get_drvdata(pdev); + atr_init = mtk_pci_check_init_status(mdev); + + return mtk_pm_restore(dev, atr_init); +} + +static int __maybe_unused mtk_pci_pm_runtime_suspend(struct device *dev) +{ + return mtk_pm_runtime_suspend(dev); +} + +static int __maybe_unused mtk_pci_pm_runtime_resume(struct device *dev) +{ + struct pci_dev *pdev = to_pci_dev(dev); + struct mtk_md_dev *mdev; + bool atr_init; + + mdev = pci_get_drvdata(pdev); + atr_init = mtk_pci_check_init_status(mdev); + + return mtk_pm_runtime_resume(dev, atr_init); +} + +static int __maybe_unused mtk_pci_pm_runtime_idle(struct device *dev) +{ + return mtk_pm_runtime_idle(dev); +} + +static void mtk_pci_pm_shutdown(struct pci_dev *pdev) +{ + struct mtk_md_dev *mdev; + + mdev = pci_get_drvdata(pdev); + + return mtk_pm_shutdown(mdev); +} + +static const struct dev_pm_ops mtk_pci_pm_ops = { + .suspend = mtk_pci_pm_suspend, + .resume = mtk_pci_pm_resume, + .freeze = mtk_pci_pm_freeze, + .thaw = mtk_pci_pm_thaw, + .poweroff = mtk_pci_pm_poweroff, + .restore = mtk_pci_pm_restore, + + SET_RUNTIME_PM_OPS(mtk_pci_pm_runtime_suspend, mtk_pci_pm_runtime_resume, + mtk_pci_pm_runtime_idle) +}; + static struct pci_driver mtk_pci_drv = { .name = "mtk_pci_drv", .id_table = mtk_pci_ids, .probe = mtk_pci_probe, .remove = mtk_pci_remove, + .driver.pm = &mtk_pci_pm_ops, + .shutdown = mtk_pci_pm_shutdown, .err_handler = &mtk_pci_err_handler }; diff --git a/drivers/net/wwan/mediatek/pcie/mtk_reg.h b/drivers/net/wwan/mediatek/pcie/mtk_reg.h index 1159c29685c5..f568a2273879 100644 --- a/drivers/net/wwan/mediatek/pcie/mtk_reg.h +++ b/drivers/net/wwan/mediatek/pcie/mtk_reg.h @@ -25,6 +25,10 @@ enum mtk_ext_evt_d2h { EXT_EVT_D2H_EXCEPT_CLEARQ_DONE = 1 << 3, EXT_EVT_D2H_EXCEPT_ALLQ_RESET = 1 << 4, EXT_EVT_D2H_BOOT_FLOW_SYNC = 1 << 5, + EXT_EVT_D2H_PCIE_PM_SUSPEND_ACK = 1 << 11, + EXT_EVT_D2H_PCIE_PM_RESUME_ACK = 1 << 12, + EXT_EVT_D2H_PCIE_PM_SUSPEND_ACK_AP = 1 << 13, + EXT_EVT_D2H_PCIE_PM_RESUME_ACK_AP = 1 << 14, EXT_EVT_D2H_ASYNC_HS_NOTIFY_SAP = 1 << 15, EXT_EVT_D2H_ASYNC_HS_NOTIFY_MD = 1 << 16, }; From patchwork Tue Nov 22 11:27:10 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: =?utf-8?b?WWFuY2hhbyBZYW5nICjmnajlvabotoUp?= X-Patchwork-Id: 24311 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:adf:f944:0:0:0:0:0 with SMTP id q4csp2149311wrr; Tue, 22 Nov 2022 03:43:31 -0800 (PST) X-Google-Smtp-Source: 
AA0mqf6b1c32AsEy6NqBEOyJRgosdUl39XTAyuarHf5aV8b9/1Pce7q9u8Nj4q9l5UQCVfRMbLaw X-Received: by 2002:a63:f04d:0:b0:470:5d17:a62e with SMTP id s13-20020a63f04d000000b004705d17a62emr4509047pgj.620.1669117411317; Tue, 22 Nov 2022 03:43:31 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1669117411; cv=none; d=google.com; s=arc-20160816; b=BaWVXpoaO/vkSQWjvjBprLkcnKlXiPxFXGp1TYw8CHbSJryt1HyO9PrV3hdRKrlHlA O2Vv084f3enFSYMHHj6BAyjNUYepCq03Ncv4400nZXcNnIaGQQcCJsxijKmfYBnGv3ta 0uHuOU2UVDhpnFuDnXZBE5xJ9h1WDZrJKcOnxJMLYwTuGm75LtOF4V31t1OFZpKgBXMl l+dnmEaAGwM6MS62YCA7SKqj8OkAeCgqd6jupbNYhXTvECyySQ2ITLOC33VZEvOhYckb Ob2kkjhWBIhxer+fO8qi9DIeTZD4vOoFi4lAUKI3r/YvvrMjd8t+xWA0yOMhAw9N9nz0 yLqw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:mime-version:message-id:date:subject:cc:to:from :dkim-signature; bh=HLfCmvTA7XX9kHpprIfVaZHlyBrC288B65o6uhfAvVg=; b=BbWlTsPgrOrNBw716P0FQ7RTOu6045NquvhHUUBANAX09TG4mD31ldmjHToXOQJ85/ Haljs4Z/9VsPmUj23MQ0SSYRvQqtiizWj1EhYA1L63wobvbgIF3Ro+XkmmFtnoX5PP4Z cbVJcF1qDlpu0jAH13qUtaGB1bxIosS8IGzCyL9mGDjlEBbVOI60pO2ZLY+GTRKJMFQ2 AFCVBBSBM8arm/hRkhQPLFYFuwZa7FiPmKDeicE/WDYkFF9EpuXMhMUpmR/1oD5AM4ZK jt24it8rvg9dNKzh7mTqIC4NgTj2ognkd68R3Q9qqbNzdonV94KTGNs28XysTck55VKH 3usw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@mediatek.com header.s=dk header.b=QcQ9rKFc; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=QUARANTINE sp=QUARANTINE dis=NONE) header.from=mediatek.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id 23-20020a631257000000b00476eab03fbbsi6407523pgs.21.2022.11.22.03.43.17; Tue, 22 Nov 2022 03:43:31 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@mediatek.com header.s=dk header.b=QcQ9rKFc; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=QUARANTINE sp=QUARANTINE dis=NONE) header.from=mediatek.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232404AbiKVLc5 (ORCPT + 99 others); Tue, 22 Nov 2022 06:32:57 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49888 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233502AbiKVLcI (ORCPT ); Tue, 22 Nov 2022 06:32:08 -0500 Received: from mailgw02.mediatek.com (unknown [210.61.82.184]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CEB295288E; Tue, 22 Nov 2022 03:27:24 -0800 (PST) X-UUID: e76ffcae5c714fee94eba8a586db9c6b-20221122 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=mediatek.com; s=dk; h=Content-Type:MIME-Version:Message-ID:Date:Subject:CC:To:From; bh=HLfCmvTA7XX9kHpprIfVaZHlyBrC288B65o6uhfAvVg=; b=QcQ9rKFcHLZE9DKEVcitV7RjbzhcUIxC/KEg89SEYZiS9Ij1sRjSml3EPPmziJAriUOSW/DQLDC7aPvshmk8HmQNN63MeL7yBpOx2shuw8UaBFWD7yigda+vas4E20n5JOL8YB/iJKPsndzWk3/VlmjpbfhWzlV4tjdMK3xSvN8=; X-CID-P-RULE: Release_Ham X-CID-O-INFO: VERSION:1.1.12,REQID:a7780134-a06a-406c-a025-c71dc2660dc1,IP:0,U RL:0,TC:0,Content:0,EDM:0,RT:0,SF:95,FILE:0,BULK:0,RULE:Release_Ham,ACTION :release,TS:95 X-CID-INFO: VERSION:1.1.12,REQID:a7780134-a06a-406c-a025-c71dc2660dc1,IP:0,URL 
:0,TC:0,Content:0,EDM:0,RT:0,SF:95,FILE:0,BULK:0,RULE:Spam_GS981B3D,ACTION :quarantine,TS:95 X-CID-META: VersionHash:62cd327,CLOUDID:bca3fbf8-3a34-4838-abcf-dfedf9dd068e,B ulkID:2211221927200U8INVQ6,BulkQuantity:0,Recheck:0,SF:38|28|17|19|48,TC:n il,Content:0,EDM:-3,IP:nil,URL:11|1,File:nil,Bulk:nil,QS:nil,BEC:nil,COL:0 X-UUID: e76ffcae5c714fee94eba8a586db9c6b-20221122 Received: from mtkmbs10n1.mediatek.inc [(172.21.101.34)] by mailgw02.mediatek.com (envelope-from ) (Generic MTA with TLSv1.2 ECDHE-RSA-AES256-GCM-SHA384 256/256) with ESMTP id 945321861; Tue, 22 Nov 2022 19:27:18 +0800 Received: from mtkmbs11n2.mediatek.inc (172.21.101.187) by mtkmbs11n2.mediatek.inc (172.21.101.187) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.792.15; Tue, 22 Nov 2022 19:27:16 +0800 Received: from mcddlt001.gcn.mediatek.inc (10.19.240.15) by mtkmbs11n2.mediatek.inc (172.21.101.73) with Microsoft SMTP Server id 15.2.792.15 via Frontend Transport; Tue, 22 Nov 2022 19:27:14 +0800 From: Yanchao Yang To: Loic Poulain , Sergey Ryazanov , Johannes Berg , "David S . Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , netdev ML , kernel ML CC: MTK ML , Liang Lu , Haijun Liu , Hua Yang , Ting Wang , Felix Chen , Mingliang Xu , Min Dong , Aiden Wang , Guohao Zhang , Chris Feng , Yanchao Yang , Lambert Wang , Mingchuang Qiao , Xiayu Zhang , Haozhe Chang , MediaTek Corporation Subject: [PATCH net-next v1 13/13] net: wwan: tmi: Add maintainers and documentation Date: Tue, 22 Nov 2022 19:27:10 +0800 Message-ID: <20221122112710.161020-1-yanchao.yang@mediatek.com> X-Mailer: git-send-email 2.18.0 MIME-Version: 1.0 X-MTK: N X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_MSPIKE_H2,SPF_HELO_PASS, SPF_PASS,UNPARSEABLE_RELAY autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1750196458308720911?= X-GMAIL-MSGID: =?utf-8?q?1750196458308720911?= From: MediaTek Corporation Adds maintainers and documentation for MediaTek TMI 5G WWAN modem device driver. Signed-off-by: Felix Chen Signed-off-by: MediaTek Corporation --- .../networking/device_drivers/wwan/index.rst | 1 + .../networking/device_drivers/wwan/tmi.rst | 48 +++++++++++++++++++ MAINTAINERS | 11 +++++ 3 files changed, 60 insertions(+) create mode 100644 Documentation/networking/device_drivers/wwan/tmi.rst diff --git a/Documentation/networking/device_drivers/wwan/index.rst b/Documentation/networking/device_drivers/wwan/index.rst index 370d8264d5dc..8298629b4d55 100644 --- a/Documentation/networking/device_drivers/wwan/index.rst +++ b/Documentation/networking/device_drivers/wwan/index.rst @@ -10,6 +10,7 @@ Contents: iosm t7xx + tmi .. only:: subproject and html diff --git a/Documentation/networking/device_drivers/wwan/tmi.rst b/Documentation/networking/device_drivers/wwan/tmi.rst new file mode 100644 index 000000000000..3655779bf692 --- /dev/null +++ b/Documentation/networking/device_drivers/wwan/tmi.rst @@ -0,0 +1,48 @@ +.. SPDX-License-Identifier: BSD-3-Clause-Clear + +.. Copyright (c) 2022, MediaTek Inc. + +.. 
_tmi_driver_doc: + +==================================================== +TMI driver for MTK PCIe-based T-series 5G Modem +==================================================== +The TMI (T-series Modem Interface) driver is a WWAN PCIe host driver developed +for data exchange over the PCIe interface between the host platform and +MediaTek's T-series 5G modem. The driver exposes control plane and data plane +interfaces to applications. The control plane provides device node interfaces +for control data transactions. The data plane provides network link interfaces +for IP data transactions. + +Control channel userspace ABI +============================= +/dev/wwan0at0 character device +------------------------------ +The driver exposes an AT port by implementing an AT WWAN port. +The userspace end of the control channel pipe is a /dev/wwan0at0 character +device. Applications shall use this interface to issue AT commands. + +/dev/wwan0mbim0 character device +-------------------------------- +The driver exposes an MBIM interface to the MBIM function by implementing an +MBIM WWAN port. The userspace end of the control channel pipe is a +/dev/wwan0mbim0 character device. Applications shall use this interface +for MBIM protocol communication. + +Data channel userspace ABI +========================== +wwan0-X network device +---------------------- +The TMI driver exposes IP link interfaces "wwan0-X" of type "wwan" for IP +traffic. The iproute network utility is used to create "wwan0-X" network +interfaces and to associate them with MBIM IP sessions. + +The userspace management application is responsible for creating a new IP link +prior to establishing an MBIM IP session where the SessionId is greater than 0. + +For example, creating a new IP link for an MBIM IP session with SessionId 1: + + ip link add dev wwan0-1 parentdev wwan0 type wwan linkid 1 + +The driver will automatically map the "wwan0-1" network device to MBIM IP +session 1. diff --git a/MAINTAINERS b/MAINTAINERS index a96c60c787af..eac544b274ac 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -13058,6 +13058,17 @@ L: netdev@vger.kernel.org S: Supported F: drivers/net/wwan/t7xx/ +MEDIATEK TMI 5G WWAN MODEM DRIVER +M: Yanchao Yang +M: Min Dong +M: MediaTek Corporation +R: Liang Lu +R: Haijun Liu +R: Lambert Wang +L: netdev@vger.kernel.org +S: Supported +F: drivers/net/wwan/tmi/ + MEDIATEK USB3 DRD IP DRIVER M: Chunfeng Yun L: linux-usb@vger.kernel.org